00:00:00.000 Started by upstream project "autotest-per-patch" build number 124182
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.120 The recommended git tool is: git
00:00:00.120 using credential 00000000-0000-0000-0000-000000000002
00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.158 Fetching changes from the remote Git repository
00:00:00.159 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.197 Using shallow fetch with depth 1
00:00:00.197 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.197 > git --version # timeout=10
00:00:00.223 > git --version # 'git version 2.39.2'
00:00:00.223 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.581 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.591 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.602 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:06.602 > git config core.sparsecheckout # timeout=10
00:00:06.614 > git read-tree -mu HEAD # timeout=10
00:00:06.630 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:06.647 Commit message: "pool: fixes for VisualBuild class"
00:00:06.647 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:06.730 [Pipeline] Start of Pipeline
00:00:06.745 [Pipeline] library
00:00:06.746 Loading library shm_lib@master
00:00:06.746 Library shm_lib@master is cached. Copying from home.
00:00:06.768 [Pipeline] node
00:00:06.778 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.779 [Pipeline] {
00:00:06.792 [Pipeline] catchError
00:00:06.794 [Pipeline] {
00:00:06.807 [Pipeline] wrap
00:00:06.816 [Pipeline] {
00:00:06.821 [Pipeline] stage
00:00:06.823 [Pipeline] { (Prologue)
00:00:07.175 [Pipeline] sh
00:00:07.453 + logger -p user.info -t JENKINS-CI
00:00:07.467 [Pipeline] echo
00:00:07.469 Node: WFP16
00:00:07.475 [Pipeline] sh
00:00:07.769 [Pipeline] setCustomBuildProperty
00:00:07.777 [Pipeline] echo
00:00:07.779 Cleanup processes
00:00:07.782 [Pipeline] sh
00:00:08.060 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.060 1121885 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.070 [Pipeline] sh
00:00:08.346 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.346 ++ grep -v 'sudo pgrep'
00:00:08.346 ++ awk '{print $1}'
00:00:08.346 + sudo kill -9
00:00:08.346 + true
00:00:08.357 [Pipeline] cleanWs
00:00:08.364 [WS-CLEANUP] Deleting project workspace...
00:00:08.364 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.370 [WS-CLEANUP] done
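The cleanup step above reduces to a small reusable idiom: match every process still running out of the job workspace, drop the pgrep invocation itself from the match list, and force-kill whatever remains while tolerating an empty list. A minimal sketch, assuming the same workspace path as this job:

    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # pgrep -af prints "PID full-command-line" for every match
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 with no PIDs exits non-zero, so '|| true' keeps set -e scripts alive
    sudo kill -9 $pids || true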
00:00:08.374 [Pipeline] setCustomBuildProperty
00:00:08.389 [Pipeline] sh
00:00:08.668 + sudo git config --global --replace-all safe.directory '*'
00:00:08.724 [Pipeline] nodesByLabel
00:00:08.725 Found a total of 2 nodes with the 'sorcerer' label
00:00:08.735 [Pipeline] httpRequest
00:00:08.740 HttpMethod: GET
00:00:08.740 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:08.743 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:08.764 Response Code: HTTP/1.1 200 OK
00:00:08.765 Success: Status code 200 is in the accepted range: 200,404
00:00:08.765 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:32.451 [Pipeline] sh
00:00:32.734 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:32.751 [Pipeline] httpRequest
00:00:32.756 HttpMethod: GET
00:00:32.756 URL: http://10.211.164.101/packages/spdk_422f7ef4e2c0d485598aef97daf75b71a64af813.tar.gz
00:00:32.757 Sending request to url: http://10.211.164.101/packages/spdk_422f7ef4e2c0d485598aef97daf75b71a64af813.tar.gz
00:00:32.767 Response Code: HTTP/1.1 200 OK
00:00:32.768 Success: Status code 200 is in the accepted range: 200,404
00:00:32.769 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_422f7ef4e2c0d485598aef97daf75b71a64af813.tar.gz
00:02:11.948 [Pipeline] sh
00:02:12.231 + tar --no-same-owner -xf spdk_422f7ef4e2c0d485598aef97daf75b71a64af813.tar.gz
00:02:16.436 [Pipeline] sh
00:02:16.722 + git -C spdk log --oneline -n5
00:02:16.722 422f7ef4e dpdk_governor: don't load if app core mask has subset of SMT siblings
00:02:16.722 7f778d681 dpdk_governor: use rte_power_set_env() to reduce noisy log messages
00:02:16.722 fb60bd1df env: add spdk_env_core_get_smt_cpuset
00:02:16.722 5e7c3ebab util: allow commas in spdk_cpuset_parse()
00:02:16.722 79d8da58d spdk_top: shorten Frequency column name
00:02:16.734 [Pipeline] }
00:02:16.752 [Pipeline] // stage
00:02:16.761 [Pipeline] stage
00:02:16.762 [Pipeline] { (Prepare)
00:02:16.776 [Pipeline] writeFile
00:02:16.789 [Pipeline] sh
00:02:17.073 + logger -p user.info -t JENKINS-CI
00:02:17.086 [Pipeline] sh
00:02:17.371 + logger -p user.info -t JENKINS-CI
00:02:17.384 [Pipeline] sh
00:02:17.667 + cat autorun-spdk.conf
00:02:17.667 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.667 SPDK_TEST_NVMF=1
00:02:17.667 SPDK_TEST_NVME_CLI=1
00:02:17.667 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:17.667 SPDK_TEST_NVMF_NICS=e810
00:02:17.667 SPDK_TEST_VFIOUSER=1
00:02:17.667 SPDK_RUN_UBSAN=1
00:02:17.667 NET_TYPE=phy
00:02:17.675 RUN_NIGHTLY=0
00:02:17.679 [Pipeline] readFile
00:02:17.704 [Pipeline] withEnv
00:02:17.706 [Pipeline] {
00:02:17.718 [Pipeline] sh
00:02:18.051 + set -ex
00:02:18.051 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:18.051 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:18.051 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:18.051 ++ SPDK_TEST_NVMF=1
00:02:18.051 ++ SPDK_TEST_NVME_CLI=1
00:02:18.051 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:18.051 ++ SPDK_TEST_NVMF_NICS=e810
00:02:18.051 ++ SPDK_TEST_VFIOUSER=1
00:02:18.051 ++ SPDK_RUN_UBSAN=1
00:02:18.051 ++ NET_TYPE=phy
00:02:18.051 ++ RUN_NIGHTLY=0
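autorun-spdk.conf is an ordinary shell fragment: the Prepare stage writes it, cat prints it for the record, and the test scripts source it after checking it exists. A sketch of producing an equivalent file by hand (the here-doc is an assumed writer, not the pipeline's actual one; the variable names are the ones shown above):

    cat > autorun-spdk.conf <<'EOF'
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVMF=1
    SPDK_TEST_NVMF_TRANSPORT=tcp
    SPDK_TEST_NVMF_NICS=e810
    NET_TYPE=phy
    RUN_NIGHTLY=0
    EOF
    # consumers then do exactly what the trace above shows:
    [[ -f autorun-spdk.conf ]] && source autorun-spdk.conf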
00:02:18.051 + case $SPDK_TEST_NVMF_NICS in
00:02:18.051 + DRIVERS=ice
00:02:18.051 + [[ tcp == \r\d\m\a ]]
00:02:18.051 + [[ -n ice ]]
00:02:18.051 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:18.051 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:18.051 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:18.051 rmmod: ERROR: Module irdma is not currently loaded
00:02:18.051 rmmod: ERROR: Module i40iw is not currently loaded
00:02:18.051 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:18.051 + true
00:02:18.051 + for D in $DRIVERS
00:02:18.051 + sudo modprobe ice
00:02:18.051 + exit 0
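The NIC preparation is a fixed pattern: map SPDK_TEST_NVMF_NICS to a kernel driver, unload RDMA-capable modules left over from other jobs (the "not currently loaded" errors are expected and swallowed), then load the driver for the NIC under test, here ice for the Intel E810. A generalized sketch; only the e810-to-ice mapping is taken from the trace, the other branch is hypothetical:

    case $SPDK_TEST_NVMF_NICS in
      e810) DRIVERS=ice ;;    # mapping used by this job
      *)    DRIVERS= ;;       # hypothetical: other NICs map to other drivers
    esac
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true   # stale modules may be absent
    for D in $DRIVERS; do
      sudo modprobe "$D"
    done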
00:02:18.064 [Pipeline] }
00:02:18.083 [Pipeline] // withEnv
00:02:18.089 [Pipeline] }
00:02:18.107 [Pipeline] // stage
00:02:18.117 [Pipeline] catchError
00:02:18.119 [Pipeline] {
00:02:18.136 [Pipeline] timeout
00:02:18.136 Timeout set to expire in 50 min
00:02:18.138 [Pipeline] {
00:02:18.153 [Pipeline] stage
00:02:18.154 [Pipeline] { (Tests)
00:02:18.168 [Pipeline] sh
00:02:18.452 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:18.452 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:18.452 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:18.452 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:18.452 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:18.452 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:18.452 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:18.452 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:18.452 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:18.452 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:18.452 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:18.452 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:18.452 + source /etc/os-release
00:02:18.452 ++ NAME='Fedora Linux'
00:02:18.452 ++ VERSION='38 (Cloud Edition)'
00:02:18.452 ++ ID=fedora
00:02:18.452 ++ VERSION_ID=38
00:02:18.452 ++ VERSION_CODENAME=
00:02:18.452 ++ PLATFORM_ID=platform:f38
00:02:18.452 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:18.452 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:18.452 ++ LOGO=fedora-logo-icon
00:02:18.452 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:18.452 ++ HOME_URL=https://fedoraproject.org/
00:02:18.452 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:18.452 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:18.452 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:18.452 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:18.452 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:18.452 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:18.452 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:18.452 ++ SUPPORT_END=2024-05-14
00:02:18.452 ++ VARIANT='Cloud Edition'
00:02:18.452 ++ VARIANT_ID=cloud
00:02:18.452 + uname -a
00:02:18.452 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:18.452 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:21.744 Hugepages
00:02:21.744 node hugesize free / total
00:02:21.744 node0 1048576kB 0 / 0
00:02:21.744 node0 2048kB 0 / 0
00:02:21.744 node1 1048576kB 0 / 0
00:02:21.744 node1 2048kB 0 / 0
00:02:21.744
00:02:21.744 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:21.744 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:21.744 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:21.744 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:21.744 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:21.744 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:21.744 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:21.744 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:21.744 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:21.744 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:21.744 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:21.744 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:21.744 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:21.744 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:21.744 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:21.744 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:21.744 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:21.744 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1
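setup.sh status reports zero free/total hugepages on both NUMA nodes at this point; the test scripts reserve them later. The same per-node counters can be read straight from sysfs, as in this sketch using the standard kernel layout:

    for node in /sys/devices/system/node/node*; do
      for h in "$node"/hugepages/hugepages-*; do
        # e.g. "node0 2048kB free=0 total=0", matching the table above
        printf '%s %s free=%s total=%s\n' "${node##*/}" "${h##*/hugepages-}" \
          "$(cat "$h/free_hugepages")" "$(cat "$h/nr_hugepages")"
      done
    done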
00:02:21.744 + rm -f /tmp/spdk-ld-path
00:02:21.744 + source autorun-spdk.conf
00:02:21.744 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:21.744 ++ SPDK_TEST_NVMF=1
00:02:21.744 ++ SPDK_TEST_NVME_CLI=1
00:02:21.744 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:21.744 ++ SPDK_TEST_NVMF_NICS=e810
00:02:21.744 ++ SPDK_TEST_VFIOUSER=1
00:02:21.744 ++ SPDK_RUN_UBSAN=1
00:02:21.744 ++ NET_TYPE=phy
00:02:21.744 ++ RUN_NIGHTLY=0
00:02:21.744 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:21.744 + [[ -n '' ]]
00:02:21.744 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:21.744 + for M in /var/spdk/build-*-manifest.txt
00:02:21.744 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:21.744 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:21.744 + for M in /var/spdk/build-*-manifest.txt
00:02:21.744 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:21.744 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:21.744 ++ uname
00:02:21.744 + [[ Linux == \L\i\n\u\x ]]
00:02:21.744 + sudo dmesg -T
00:02:21.744 + sudo dmesg --clear
00:02:21.744 + dmesg_pid=1123420
00:02:21.744 + [[ Fedora Linux == FreeBSD ]]
00:02:21.744 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:21.744 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:21.744 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:21.744 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:21.744 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:21.744 + [[ -x /usr/src/fio-static/fio ]]
00:02:21.744 + export FIO_BIN=/usr/src/fio-static/fio
00:02:21.744 + FIO_BIN=/usr/src/fio-static/fio
00:02:21.744 + sudo dmesg -Tw
00:02:21.744 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:21.744 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:21.744 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:21.744 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:21.744 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:21.744 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:21.744 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:21.744 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
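Everything in this environment block follows one probe-and-export pattern: test for an optional artifact on the host and export the corresponding *_BIN or *_IMAGE variable only when it exists, so later scripts can fall back when it does not. A condensed sketch using the paths from the trace:

    [[ -x /usr/src/fio-static/fio ]] && export FIO_BIN=/usr/src/fio-static/fio
    if [[ ! -v VFIO_QEMU_BIN && -e /usr/local/qemu/vfio-user-latest ]]; then
      export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
    fi
    [[ -e /usr/local/qemu/vanilla-latest ]] &&
      export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64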
00:02:21.744 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:21.744 Test configuration:
00:02:21.744 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:21.744 SPDK_TEST_NVMF=1
00:02:21.744 SPDK_TEST_NVME_CLI=1
00:02:21.744 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:21.744 SPDK_TEST_NVMF_NICS=e810
00:02:21.744 SPDK_TEST_VFIOUSER=1
00:02:21.744 SPDK_RUN_UBSAN=1
00:02:21.744 NET_TYPE=phy
00:02:21.744 RUN_NIGHTLY=0
21:19:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:21.744 21:19:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:21.744 21:19:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:21.744 21:19:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:21.744 21:19:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.744 21:19:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.744 21:19:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.744 21:19:21 -- paths/export.sh@5 -- $ export PATH
00:02:21.744 21:19:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21.744 21:19:21 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:21.744 21:19:21 -- common/autobuild_common.sh@437 -- $ date +%s
00:02:21.744 21:19:21 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717787961.XXXXXX
00:02:21.744 21:19:21 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717787961.RnR5yC
00:02:21.744 21:19:21 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:02:21.744 21:19:21 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:02:21.744 21:19:21 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:21.744 21:19:21 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:21.744 21:19:21 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:21.744 21:19:21 -- common/autobuild_common.sh@453 -- $ get_config_params
00:02:21.744 21:19:21 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:02:21.744 21:19:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:21.744 21:19:21 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:21.744 21:19:21 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:02:21.745 21:19:21 -- pm/common@17 -- $ local monitor
00:02:21.745 21:19:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.745 21:19:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.745 21:19:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.745 21:19:21 -- pm/common@21 -- $ date +%s
00:02:21.745 21:19:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:21.745 21:19:21 -- pm/common@21 -- $ date +%s
00:02:21.745 21:19:21 -- pm/common@25 -- $ sleep 1
00:02:21.745 21:19:21 -- pm/common@21 -- $ date +%s
00:02:21.745 21:19:21 -- pm/common@21 -- $ date +%s
00:02:21.745 21:19:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717787961
00:02:21.745 21:19:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717787961
00:02:21.745 21:19:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717787961
00:02:21.745 21:19:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717787961
00:02:21.745 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717787961_collect-vmstat.pm.log
00:02:21.745 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717787961_collect-cpu-load.pm.log
00:02:21.745 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717787961_collect-cpu-temp.pm.log
00:02:21.745 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717787961_collect-bmc-pm.bmc.pm.log
00:02:22.683 21:19:22 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:02:22.683 21:19:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:22.683 21:19:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:22.683 21:19:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:22.683 21:19:22 -- spdk/autobuild.sh@16 -- $ date -u
00:02:22.683 Fri Jun 7 07:19:22 PM UTC 2024
00:02:22.683 21:19:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:22.683 v24.09-pre-62-g422f7ef4e
00:02:22.683 21:19:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:22.683 21:19:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:22.683 21:19:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:22.683 21:19:22 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:02:22.683 21:19:22 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:02:22.683 21:19:22 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.683 ************************************
00:02:22.683 START TEST ubsan
00:02:22.683 ************************************
00:02:22.683 21:19:22 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:02:22.683 using ubsan
00:02:22.683
00:02:22.683 real 0m0.000s
00:02:22.683 user 0m0.000s
00:02:22.683 sys 0m0.000s
00:02:22.683 21:19:22 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:02:22.683 21:19:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:22.683 ************************************
00:02:22.683 END TEST ubsan
00:02:22.683 ************************************
00:02:22.683 21:19:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:22.683 21:19:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:22.683 21:19:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:22.683 21:19:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:22.683 21:19:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:22.683 21:19:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:22.683 21:19:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:22.683 21:19:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:22.683 21:19:22 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:22.683 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:22.683 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:23.254 Using 'verbs' RDMA provider
00:02:36.039 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:50.929 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:50.929 Creating mk/config.mk...done.
00:02:50.929 Creating mk/cc.flags.mk...done.
00:02:50.929 Type 'make' to build.
00:02:50.929 21:19:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:02:50.929 21:19:49 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:02:50.929 21:19:49 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:02:50.929 21:19:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:50.929 ************************************
00:02:50.929 START TEST make
00:02:50.929 ************************************
00:02:50.929 21:19:49 make -- common/autotest_common.sh@1124 -- $ make -j112
00:02:50.929 make[1]: Nothing to be done for 'all'.
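The configure invocation above is just the sourced config_params plus --with-shared. Regrouped for readability (a sketch of the same command shown in the trace; the grouping and comments are editorial, not from configure's own help):

    # debug build, fatal warnings, UBSan (SPDK_RUN_UBSAN=1) and coverage instrumentation
    ./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
        --with-rdma --with-idxd --with-ublk --with-vfio-user \
        --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --with-shared
    make -j112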
00:02:51.189 The Meson build system
00:02:51.189 Version: 1.3.1
00:02:51.189 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:51.189 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:51.189 Build type: native build
00:02:51.189 Project name: libvfio-user
00:02:51.189 Project version: 0.0.1
00:02:51.189 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:51.189 C linker for the host machine: cc ld.bfd 2.39-16
00:02:51.189 Host machine cpu family: x86_64
00:02:51.189 Host machine cpu: x86_64
00:02:51.189 Run-time dependency threads found: YES
00:02:51.189 Library dl found: YES
00:02:51.189 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:51.189 Run-time dependency json-c found: YES 0.17
00:02:51.189 Run-time dependency cmocka found: YES 1.1.7
00:02:51.189 Program pytest-3 found: NO
00:02:51.189 Program flake8 found: NO
00:02:51.189 Program misspell-fixer found: NO
00:02:51.189 Program restructuredtext-lint found: NO
00:02:51.189 Program valgrind found: YES (/usr/bin/valgrind)
00:02:51.189 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:51.189 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:51.189 Compiler for C supports arguments -Wwrite-strings: YES
00:02:51.189 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:51.189 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:51.189 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:51.189 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:51.189 Build targets in project: 8
00:02:51.189 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:51.189 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:51.189
00:02:51.189 libvfio-user 0.0.1
00:02:51.189
00:02:51.189 User defined options
00:02:51.189 buildtype : debug
00:02:51.189 default_library: shared
00:02:51.189 libdir : /usr/local/lib
00:02:51.189
00:02:51.189 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:51.755 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:51.755 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
[2/37] Compiling C object samples/lspci.p/lspci.c.o
[3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
[4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
[5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
[6/37] Compiling C object samples/null.p/null.c.o
[7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
[8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
[9/37] Compiling C object samples/client.p/.._lib_tran.c.o
[10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
[11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
[12/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
[13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
[14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
[15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
[16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
[17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
[18/37] Compiling C object samples/client.p/.._lib_migration.c.o
[19/37] Compiling C object test/unit_tests.p/mocks.c.o
[20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
[21/37] Compiling C object samples/client.p/client.c.o
[22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
[23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
[24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
[25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
[26/37] Compiling C object samples/server.p/server.c.o
[27/37] Linking target samples/client
[28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
[29/37] Linking target lib/libvfio-user.so.0.0.1
[30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
[31/37] Linking target test/unit_tests
[32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
[33/37] Linking target samples/lspci
[34/37] Linking target samples/null
[35/37] Linking target samples/server
[36/37] Linking target samples/gpio-pci-idio-16
[37/37] Linking target samples/shadow_ioeventfd_server
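libvfio-user is built as a plain out-of-tree Meson project with the options echoed in the summary above (buildtype=debug, default_library=shared), and is then staged into the SPDK tree with a DESTDIR install, which is the next command in the log. A generic sketch of that sequence; STAGE is a stand-in for the build/libvfio-user prefix used here:

    meson setup build-debug libvfio-user -Dbuildtype=debug -Ddefault_library=shared
    ninja -C build-debug
    # DESTDIR redirects the install into a staging tree instead of /usr/local
    DESTDIR="$STAGE" meson install --quiet -C build-debug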
00:02:52.273 INFO: autodetecting backend as ninja
00:02:52.273 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:52.273 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:52.531 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:52.531 ninja: no work to do.
00:02:59.113 The Meson build system
00:02:59.113 Version: 1.3.1
00:02:59.113 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:59.113 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:59.113 Build type: native build
00:02:59.113 Program cat found: YES (/usr/bin/cat)
00:02:59.113 Project name: DPDK
00:02:59.113 Project version: 24.03.0
00:02:59.113 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:59.113 C linker for the host machine: cc ld.bfd 2.39-16
00:02:59.113 Host machine cpu family: x86_64
00:02:59.113 Host machine cpu: x86_64
00:02:59.113 Message: ## Building in Developer Mode ##
00:02:59.113 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:59.113 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:59.113 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:59.113 Program python3 found: YES (/usr/bin/python3)
00:02:59.113 Program cat found: YES (/usr/bin/cat)
00:02:59.113 Compiler for C supports arguments -march=native: YES
00:02:59.113 Checking for size of "void *" : 8
00:02:59.113 Checking for size of "void *" : 8 (cached)
00:02:59.113 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:59.113 Library m found: YES
00:02:59.113 Library numa found: YES
00:02:59.113 Has header "numaif.h" : YES
00:02:59.113 Library fdt found: NO
00:02:59.113 Library execinfo found: NO
00:02:59.113 Has header "execinfo.h" : YES
00:02:59.113 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:59.113 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:59.113 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:59.113 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:59.113 Run-time dependency openssl found: YES 3.0.9
00:02:59.113 Run-time dependency libpcap found: YES 1.10.4
00:02:59.113 Has header "pcap.h" with dependency libpcap: YES
00:02:59.113 Compiler for C supports arguments -Wcast-qual: YES
00:02:59.113 Compiler for C supports arguments -Wdeprecated: YES
00:02:59.113 Compiler for C supports arguments -Wformat: YES
00:02:59.113 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:59.113 Compiler for C supports arguments -Wformat-security: NO
00:02:59.113 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:59.113 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:59.113 Compiler for C supports arguments -Wnested-externs: YES
00:02:59.113 Compiler for C supports arguments -Wold-style-definition: YES
00:02:59.113 Compiler for C supports arguments -Wpointer-arith: YES
00:02:59.113 Compiler for C supports arguments -Wsign-compare: YES
00:02:59.113 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:59.113 Compiler for C supports arguments -Wundef: YES
00:02:59.113 Compiler for C supports arguments -Wwrite-strings: YES
00:02:59.113 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:59.113 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:59.113 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:59.113 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:59.113 Program objdump found: YES (/usr/bin/objdump)
00:02:59.113 Compiler for C supports arguments -mavx512f: YES
00:02:59.113 Checking if "AVX512 checking" compiles: YES
00:02:59.113 Fetching value of define "__SSE4_2__" : 1
00:02:59.113 Fetching value of define "__AES__" : 1
00:02:59.113 Fetching value of define "__AVX__" : 1
00:02:59.113 Fetching value of define "__AVX2__" : 1
00:02:59.113 Fetching value of define "__AVX512BW__" : 1
00:02:59.113 Fetching value of define "__AVX512CD__" : 1
00:02:59.113 Fetching value of define "__AVX512DQ__" : 1
00:02:59.113 Fetching value of define "__AVX512F__" : 1
00:02:59.113 Fetching value of define "__AVX512VL__" : 1
00:02:59.113 Fetching value of define "__PCLMUL__" : 1
00:02:59.113 Fetching value of define "__RDRND__" : 1
00:02:59.113 Fetching value of define "__RDSEED__" : 1
00:02:59.113 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:59.113 Fetching value of define "__znver1__" : (undefined)
00:02:59.113 Fetching value of define "__znver2__" : (undefined)
00:02:59.113 Fetching value of define "__znver3__" : (undefined)
00:02:59.113 Fetching value of define "__znver4__" : (undefined)
00:02:59.113 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:59.113 Message: lib/log: Defining dependency "log"
00:02:59.113 Message: lib/kvargs: Defining dependency "kvargs"
00:02:59.113 Message: lib/telemetry: Defining dependency "telemetry"
00:02:59.113 Checking for function "getentropy" : NO
00:02:59.113 Message: lib/eal: Defining dependency "eal"
00:02:59.113 Message: lib/ring: Defining dependency "ring"
00:02:59.113 Message: lib/rcu: Defining dependency "rcu"
00:02:59.113 Message: lib/mempool: Defining dependency "mempool"
00:02:59.113 Message: lib/mbuf: Defining dependency "mbuf"
00:02:59.113 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:59.113 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:59.113 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:59.113 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:59.113 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:59.113 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:59.113 Compiler for C supports arguments -mpclmul: YES
00:02:59.113 Compiler for C supports arguments -maes: YES
00:02:59.113 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:59.113 Compiler for C supports arguments -mavx512bw: YES
00:02:59.113 Compiler for C supports arguments -mavx512dq: YES
00:02:59.113 Compiler for C supports arguments -mavx512vl: YES
00:02:59.113 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:59.113 Compiler for C supports arguments -mavx2: YES
00:02:59.113 Compiler for C supports arguments -mavx: YES
00:02:59.113 Message: lib/net: Defining dependency "net"
00:02:59.113 Message: lib/meter: Defining dependency "meter"
00:02:59.113 Message: lib/ethdev: Defining dependency "ethdev"
00:02:59.113 Message: lib/pci: Defining dependency "pci"
00:02:59.113 Message: lib/cmdline: Defining dependency "cmdline"
00:02:59.113 Message: lib/hash: Defining dependency "hash"
00:02:59.113 Message: lib/timer: Defining dependency "timer"
00:02:59.113 Message: lib/compressdev: Defining dependency "compressdev"
00:02:59.113 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:59.113 Message: lib/dmadev: Defining dependency "dmadev"
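The long run of "Compiler for C supports arguments" and "Fetching value of define" lines is Meson probing the toolchain: each candidate flag is test-compiled, and the predefined SIMD macros decide which vector code paths DPDK builds. The same macro set can be inspected directly with a one-liner (the grep pattern is illustrative):

    # dump the compiler's predefined macros for the native ISA, keep the vector ones
    gcc -march=native -dM -E - </dev/null | grep -E '__(AVX|SSE4|AES|PCLMUL|RDRND|RDSEED)'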
00:02:59.113 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:59.113 Message: lib/power: Defining dependency "power"
00:02:59.113 Message: lib/reorder: Defining dependency "reorder"
00:02:59.113 Message: lib/security: Defining dependency "security"
00:02:59.113 Has header "linux/userfaultfd.h" : YES
00:02:59.113 Has header "linux/vduse.h" : YES
00:02:59.113 Message: lib/vhost: Defining dependency "vhost"
00:02:59.113 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:59.113 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:59.113 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:59.113 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:59.113 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:59.113 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:59.114 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:59.114 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:59.114 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:59.114 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:59.114 Program doxygen found: YES (/usr/bin/doxygen)
00:02:59.114 Configuring doxy-api-html.conf using configuration
00:02:59.114 Configuring doxy-api-man.conf using configuration
00:02:59.114 Program mandb found: YES (/usr/bin/mandb)
00:02:59.114 Program sphinx-build found: NO
00:02:59.114 Configuring rte_build_config.h using configuration
00:02:59.114 Message:
00:02:59.114 =================
00:02:59.114 Applications Enabled
00:02:59.114 =================
00:02:59.114
00:02:59.114 apps:
00:02:59.114
00:02:59.114
00:02:59.114 Message:
00:02:59.114 =================
00:02:59.114 Libraries Enabled
00:02:59.114 =================
00:02:59.114
00:02:59.114 libs:
00:02:59.114 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:59.114 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:59.114 cryptodev, dmadev, power, reorder, security, vhost,
00:02:59.114
00:02:59.114 Message:
00:02:59.114 ===============
00:02:59.114 Drivers Enabled
00:02:59.114 ===============
00:02:59.114
00:02:59.114 common:
00:02:59.114
00:02:59.114 bus:
00:02:59.114 pci, vdev,
00:02:59.114 mempool:
00:02:59.114 ring,
00:02:59.114 dma:
00:02:59.114
00:02:59.114 net:
00:02:59.114
00:02:59.114 crypto:
00:02:59.114
00:02:59.114 compress:
00:02:59.114
00:02:59.114 vdpa:
00:02:59.114
00:02:59.114
00:02:59.114 Message:
00:02:59.114 =================
00:02:59.114 Content Skipped
00:02:59.114 =================
00:02:59.114
00:02:59.114 apps:
00:02:59.114 dumpcap: explicitly disabled via build config
00:02:59.114 graph: explicitly disabled via build config
00:02:59.114 pdump: explicitly disabled via build config
00:02:59.114 proc-info: explicitly disabled via build config
00:02:59.114 test-acl: explicitly disabled via build config
00:02:59.114 test-bbdev: explicitly disabled via build config
00:02:59.114 test-cmdline: explicitly disabled via build config
00:02:59.114 test-compress-perf: explicitly disabled via build config
00:02:59.114 test-crypto-perf: explicitly disabled via build config
00:02:59.114 test-dma-perf: explicitly disabled via build config
00:02:59.114 test-eventdev: explicitly disabled via build config
00:02:59.114 test-fib: explicitly disabled via build config
00:02:59.114 test-flow-perf: explicitly disabled via build config
00:02:59.114 test-gpudev: explicitly disabled via build config
00:02:59.114 test-mldev: explicitly disabled via build config
00:02:59.114 test-pipeline: explicitly disabled via build config
00:02:59.114 test-pmd: explicitly disabled via build config
00:02:59.114 test-regex: explicitly disabled via build config
00:02:59.114 test-sad: explicitly disabled via build config
00:02:59.114 test-security-perf: explicitly disabled via build config
00:02:59.114
00:02:59.114 libs:
00:02:59.114 argparse: explicitly disabled via build config
00:02:59.114 metrics: explicitly disabled via build config
00:02:59.114 acl: explicitly disabled via build config
00:02:59.114 bbdev: explicitly disabled via build config
00:02:59.114 bitratestats: explicitly disabled via build config
00:02:59.114 bpf: explicitly disabled via build config
00:02:59.114 cfgfile: explicitly disabled via build config
00:02:59.114 distributor: explicitly disabled via build config
00:02:59.114 efd: explicitly disabled via build config
00:02:59.114 eventdev: explicitly disabled via build config
00:02:59.114 dispatcher: explicitly disabled via build config
00:02:59.114 gpudev: explicitly disabled via build config
00:02:59.114 gro: explicitly disabled via build config
00:02:59.114 gso: explicitly disabled via build config
00:02:59.114 ip_frag: explicitly disabled via build config
00:02:59.114 jobstats: explicitly disabled via build config
00:02:59.114 latencystats: explicitly disabled via build config
00:02:59.114 lpm: explicitly disabled via build config
00:02:59.114 member: explicitly disabled via build config
00:02:59.114 pcapng: explicitly disabled via build config
00:02:59.114 rawdev: explicitly disabled via build config
00:02:59.114 regexdev: explicitly disabled via build config
00:02:59.114 mldev: explicitly disabled via build config
00:02:59.114 rib: explicitly disabled via build config
00:02:59.114 sched: explicitly disabled via build config
00:02:59.114 stack: explicitly disabled via build config
00:02:59.114 ipsec: explicitly disabled via build config
00:02:59.114 pdcp: explicitly disabled via build config
00:02:59.114 fib: explicitly disabled via build config
00:02:59.114 port: explicitly disabled via build config
00:02:59.114 pdump: explicitly disabled via build config
00:02:59.114 table: explicitly disabled via build config
00:02:59.114 pipeline: explicitly disabled via build config
00:02:59.114 graph: explicitly disabled via build config
00:02:59.114 node: explicitly disabled via build config
00:02:59.114
00:02:59.114 drivers:
00:02:59.114 common/cpt: not in enabled drivers build config
00:02:59.114 common/dpaax: not in enabled drivers build config
00:02:59.114 common/iavf: not in enabled drivers build config
00:02:59.114 common/idpf: not in enabled drivers build config
00:02:59.114 common/ionic: not in enabled drivers build config
00:02:59.114 common/mvep: not in enabled drivers build config
00:02:59.114 common/octeontx: not in enabled drivers build config
00:02:59.114 bus/auxiliary: not in enabled drivers build config
00:02:59.114 bus/cdx: not in enabled drivers build config
00:02:59.114 bus/dpaa: not in enabled drivers build config
00:02:59.114 bus/fslmc: not in enabled drivers build config
00:02:59.114 bus/ifpga: not in enabled drivers build config
00:02:59.114 bus/platform: not in enabled drivers build config
00:02:59.114 bus/uacce: not in enabled drivers build config
00:02:59.114 bus/vmbus: not in enabled drivers build config
00:02:59.114 common/cnxk: not in enabled drivers build config
00:02:59.114 common/mlx5: not in enabled drivers build config
00:02:59.114 common/nfp: not in enabled drivers build config
00:02:59.114 common/nitrox: not in enabled drivers build config
00:02:59.114 common/qat: not in enabled drivers build config
00:02:59.114 common/sfc_efx: not in enabled drivers build config
00:02:59.114 mempool/bucket: not in enabled drivers build config
00:02:59.114 mempool/cnxk: not in enabled drivers build config
00:02:59.114 mempool/dpaa: not in enabled drivers build config
00:02:59.114 mempool/dpaa2: not in enabled drivers build config
00:02:59.114 mempool/octeontx: not in enabled drivers build config
00:02:59.114 mempool/stack: not in enabled drivers build config
00:02:59.114 dma/cnxk: not in enabled drivers build config
00:02:59.114 dma/dpaa: not in enabled drivers build config
00:02:59.114 dma/dpaa2: not in enabled drivers build config
00:02:59.114 dma/hisilicon: not in enabled drivers build config
00:02:59.114 dma/idxd: not in enabled drivers build config
00:02:59.114 dma/ioat: not in enabled drivers build config
00:02:59.114 dma/skeleton: not in enabled drivers build config
00:02:59.114 net/af_packet: not in enabled drivers build config
00:02:59.114 net/af_xdp: not in enabled drivers build config
00:02:59.114 net/ark: not in enabled drivers build config
00:02:59.114 net/atlantic: not in enabled drivers build config
00:02:59.114 net/avp: not in enabled drivers build config
00:02:59.114 net/axgbe: not in enabled drivers build config
00:02:59.114 net/bnx2x: not in enabled drivers build config
00:02:59.114 net/bnxt: not in enabled drivers build config
00:02:59.114 net/bonding: not in enabled drivers build config
00:02:59.114 net/cnxk: not in enabled drivers build config
00:02:59.114 net/cpfl: not in enabled drivers build config
00:02:59.114 net/cxgbe: not in enabled drivers build config
00:02:59.114 net/dpaa: not in enabled drivers build config
00:02:59.114 net/dpaa2: not in enabled drivers build config
00:02:59.114 net/e1000: not in enabled drivers build config
00:02:59.114 net/ena: not in enabled drivers build config
00:02:59.114 net/enetc: not in enabled drivers build config
00:02:59.114 net/enetfec: not in enabled drivers build config
00:02:59.114 net/enic: not in enabled drivers build config
00:02:59.114 net/failsafe: not in enabled drivers build config
00:02:59.114 net/fm10k: not in enabled drivers build config
00:02:59.114 net/gve: not in enabled drivers build config
00:02:59.114 net/hinic: not in enabled drivers build config
00:02:59.114 net/hns3: not in enabled drivers build config
00:02:59.114 net/i40e: not in enabled drivers build config
00:02:59.114 net/iavf: not in enabled drivers build config
00:02:59.114 net/ice: not in enabled drivers build config
00:02:59.114 net/idpf: not in enabled drivers build config
00:02:59.114 net/igc: not in enabled drivers build config
00:02:59.114 net/ionic: not in enabled drivers build config
00:02:59.114 net/ipn3ke: not in enabled drivers build config
00:02:59.114 net/ixgbe: not in enabled drivers build config
00:02:59.114 net/mana: not in enabled drivers build config
00:02:59.114 net/memif: not in enabled drivers build config
00:02:59.114 net/mlx4: not in enabled drivers build config
00:02:59.114 net/mlx5: not in enabled drivers build config
00:02:59.114 net/mvneta: not in enabled drivers build config
00:02:59.114 net/mvpp2: not in enabled drivers build config
00:02:59.114 net/netvsc: not in enabled drivers build config
00:02:59.114 net/nfb: not in enabled drivers build config
00:02:59.114 net/nfp: not in enabled drivers build config
00:02:59.114 net/ngbe: not in enabled drivers build config
00:02:59.114 net/null: not in enabled drivers build config
00:02:59.114 net/octeontx: not in enabled drivers build config
00:02:59.114 net/octeon_ep: not in enabled drivers build config
00:02:59.114 net/pcap: not in enabled drivers build config
00:02:59.114 net/pfe: not in enabled drivers build config
00:02:59.114 net/qede: not in enabled drivers build config
00:02:59.114 net/ring: not in enabled drivers build config
00:02:59.114 net/sfc: not in enabled drivers build config
00:02:59.114 net/softnic: not in enabled drivers build config
00:02:59.114 net/tap: not in enabled drivers build config
00:02:59.114 net/thunderx: not in enabled drivers build config
00:02:59.114 net/txgbe: not in enabled drivers build config
00:02:59.114 net/vdev_netvsc: not in enabled drivers build config
00:02:59.114 net/vhost: not in enabled drivers build config
00:02:59.115 net/virtio: not in enabled drivers build config
00:02:59.115 net/vmxnet3: not in enabled drivers build config
00:02:59.115 raw/*: missing internal dependency, "rawdev"
00:02:59.115 crypto/armv8: not in enabled drivers build config
00:02:59.115 crypto/bcmfs: not in enabled drivers build config
00:02:59.115 crypto/caam_jr: not in enabled drivers build config
00:02:59.115 crypto/ccp: not in enabled drivers build config
00:02:59.115 crypto/cnxk: not in enabled drivers build config
00:02:59.115 crypto/dpaa_sec: not in enabled drivers build config
00:02:59.115 crypto/dpaa2_sec: not in enabled drivers build config
00:02:59.115 crypto/ipsec_mb: not in enabled drivers build config
00:02:59.115 crypto/mlx5: not in enabled drivers build config
00:02:59.115 crypto/mvsam: not in enabled drivers build config
00:02:59.115 crypto/nitrox: not in enabled drivers build config
00:02:59.115 crypto/null: not in enabled drivers build config
00:02:59.115 crypto/octeontx: not in enabled drivers build config
00:02:59.115 crypto/openssl: not in enabled drivers build config
00:02:59.115 crypto/scheduler: not in enabled drivers build config
00:02:59.115 crypto/uadk: not in enabled drivers build config
00:02:59.115 crypto/virtio: not in enabled drivers build config
00:02:59.115 compress/isal: not in enabled drivers build config
00:02:59.115 compress/mlx5: not in enabled drivers build config
00:02:59.115 compress/nitrox: not in enabled drivers build config
00:02:59.115 compress/octeontx: not in enabled drivers build config
00:02:59.115 compress/zlib: not in enabled drivers build config
00:02:59.115 regex/*: missing internal dependency, "regexdev"
00:02:59.115 ml/*: missing internal dependency, "mldev"
00:02:59.115 vdpa/ifc: not in enabled drivers build config
00:02:59.115 vdpa/mlx5: not in enabled drivers build config
00:02:59.115 vdpa/nfp: not in enabled drivers build config
00:02:59.115 vdpa/sfc: not in enabled drivers build config
00:02:59.115 event/*: missing internal dependency, "eventdev"
00:02:59.115 baseband/*: missing internal dependency, "bbdev"
00:02:59.115 gpu/*: missing internal dependency, "gpudev"
00:02:59.115
00:02:59.115
00:02:59.115 Build targets in project: 85
00:02:59.115
00:02:59.115 DPDK 24.03.0
00:02:59.115
00:02:59.115 User defined options
00:02:59.115 buildtype : debug
00:02:59.115 default_library : shared
00:02:59.115 libdir : lib
00:02:59.115 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:59.115 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:59.115 c_link_args :
00:02:59.115 cpu_instruction_set: native
00:02:59.115 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:02:59.115 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:02:59.115 enable_docs : false
00:02:59.115 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:59.115 enable_kmods : false
00:02:59.115 tests : false
00:02:59.115
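This "User defined options" block is how the SPDK build slims the bundled DPDK: every app and library SPDK does not consume is switched off at setup time, and only the pci/vdev buses plus the ring mempool driver stay enabled. Reproducing the printed options by hand would look roughly like this sketch, where APPS_TO_SKIP and LIBS_TO_SKIP stand for the two comma-separated lists above:

    meson setup build-tmp \
      -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
      -Dcpu_instruction_set=native -Denable_docs=false -Denable_kmods=false -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_apps="$APPS_TO_SKIP" -Ddisable_libs="$LIBS_TO_SKIP" \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror'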
00:02:59.115 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:59.115 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:59.115 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:59.378 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[12/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[13/268] Linking static target lib/librte_kvargs.a
[14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[15/268] Linking static target lib/librte_log.a
[16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:59.642 [17/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[27/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[28/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[30/268] Linking static target lib/librte_pci.a
[31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[36/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:59.902 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[54/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[58/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
[63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[65/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
[66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[67/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[68/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[70/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[71/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[72/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:59.902 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:59.902 [74/268] Linking static target lib/librte_telemetry.a 00:02:59.902 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:59.902 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.902 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:59.902 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:59.902 [79/268] Linking static target lib/librte_ring.a 00:02:59.902 [80/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:59.902 [81/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:59.902 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.902 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:59.902 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:59.902 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:59.902 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:59.902 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:59.902 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:59.902 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:59.902 [90/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:59.902 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:59.902 [92/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:59.902 [93/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:59.902 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:59.902 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.902 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:59.902 [97/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:59.902 [98/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:59.902 [99/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:59.902 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:59.902 [101/268] Linking static target lib/librte_meter.a 00:02:59.902 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:59.902 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:59.902 [104/268] Linking static target lib/librte_cmdline.a 00:02:59.902 [105/268] Linking static target lib/librte_net.a 00:02:59.902 [106/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.902 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.902 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:00.161 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:00.161 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:00.161 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.161 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:00.161 
[113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.161 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.161 [115/268] Linking static target lib/librte_mempool.a 00:03:00.161 [116/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.161 [117/268] Linking static target lib/librte_timer.a 00:03:00.161 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:00.161 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:00.161 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:00.161 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.161 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:00.161 [123/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.161 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:00.161 [125/268] Linking static target lib/librte_rcu.a 00:03:00.161 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:00.161 [127/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.161 [128/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:00.161 [129/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:00.161 [130/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.161 [131/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:00.161 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.161 [133/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.161 [134/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:00.161 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:00.161 [136/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:00.161 [137/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:00.161 [138/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:00.161 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.161 [140/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:00.161 [141/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:00.161 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:00.161 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:00.161 [144/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:00.161 [145/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:00.161 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.161 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.161 [148/268] Linking static target lib/librte_dmadev.a 00:03:00.161 [149/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.161 [150/268] Linking static target lib/librte_compressdev.a 00:03:00.161 [151/268] Linking static target lib/librte_eal.a 00:03:00.161 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:00.161 [153/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.161 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:00.161 [155/268] Linking target lib/librte_log.so.24.1 00:03:00.161 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.161 [157/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.420 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.420 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:00.420 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.420 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.420 [162/268] Linking static target lib/librte_hash.a 00:03:00.420 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:00.420 [164/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.420 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:00.420 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:00.420 [167/268] Linking static target lib/librte_power.a 00:03:00.420 [168/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:00.420 [169/268] Linking static target lib/librte_mbuf.a 00:03:00.420 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.420 [171/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:00.420 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:00.420 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.420 [174/268] Linking target lib/librte_kvargs.so.24.1 00:03:00.420 [175/268] Linking static target lib/librte_reorder.a 00:03:00.420 [176/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.420 [177/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:00.420 [178/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:00.420 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.420 [180/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.420 [181/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.420 [182/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.420 [183/268] Linking static target lib/librte_security.a 00:03:00.420 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:00.420 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:00.420 [186/268] Linking target lib/librte_telemetry.so.24.1 00:03:00.420 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:00.420 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:00.420 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.679 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:00.679 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:00.679 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:00.679 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:00.679 [194/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:00.679 [195/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:00.679 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:00.679 [197/268] Linking static target lib/librte_cryptodev.a 00:03:00.679 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:00.679 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:00.679 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:00.679 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.679 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.679 [203/268] Linking static target drivers/librte_bus_vdev.a 00:03:00.679 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:00.679 [205/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:00.937 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:00.937 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.937 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.937 [209/268] Linking static target drivers/librte_bus_pci.a 00:03:00.937 [210/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.937 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:00.937 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.937 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.937 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.937 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.937 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.937 [217/268] Linking static target drivers/librte_mempool_ring.a 00:03:01.196 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.196 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.196 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.196 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.196 [222/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.196 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.196 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:01.196 [225/268] Linking static target lib/librte_ethdev.a 00:03:01.454 [226/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.454 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.832 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.832 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.832 
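The "Generating symbol file …" entries interleaved with the Linking steps record each shared object's exported symbols as it is produced. The real rule is a meson custom command, so the following nm one-liner is only an approximation of what such a step captures, using the librte_log paths shown in the log.

```bash
# Roughly what a "Generating symbol file" step amounts to: dump the defined
# dynamic symbols of a freshly linked DPDK shared object. Approximation only;
# the actual meson custom command may differ.
nm --dynamic --defined-only lib/librte_log.so.24.1 \
    > lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
```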
[230/268] Linking static target lib/librte_vhost.a 00:03:04.738 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.018 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.277 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.535 [234/268] Linking target lib/librte_eal.so.24.1 00:03:10.535 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:10.535 [236/268] Linking target lib/librte_pci.so.24.1 00:03:10.535 [237/268] Linking target lib/librte_ring.so.24.1 00:03:10.535 [238/268] Linking target lib/librte_timer.so.24.1 00:03:10.535 [239/268] Linking target lib/librte_meter.so.24.1 00:03:10.535 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:10.535 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:10.794 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:10.794 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:10.794 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:10.794 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:10.794 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:10.794 [247/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:10.794 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:10.794 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:11.053 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:11.053 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:11.053 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:11.053 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:11.312 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:11.313 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:11.313 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:11.313 [257/268] Linking target lib/librte_net.so.24.1 00:03:11.313 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:11.313 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:11.313 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:11.313 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:11.313 [262/268] Linking target lib/librte_hash.so.24.1 00:03:11.313 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:11.571 [264/268] Linking target lib/librte_security.so.24.1 00:03:11.571 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:11.571 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:11.572 [267/268] Linking target lib/librte_power.so.24.1 00:03:11.572 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:11.572 INFO: autodetecting backend as ninja 00:03:11.572 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:03:12.947 CC lib/ut/ut.o 00:03:12.947 CC lib/log/log.o 00:03:12.947 CC lib/log/log_deprecated.o 00:03:12.947 CC lib/log/log_flags.o 00:03:12.947 CC lib/ut_mock/mock.o 00:03:12.947 LIB libspdk_ut.a 00:03:12.947 LIB libspdk_log.a 
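From this point the log switches from the DPDK submodule to SPDK's own tree, and the entry tags change accordingly: CC compiles an object, LIB archives a static library, SO names the versioned shared object being linked, and SYMLINK refreshes the unversioned development link that points at it. A hedged sketch of what a SYMLINK step comes down to, using the libspdk_ut names visible below (the real rule lives in SPDK's mk/ makefiles):

```bash
# Illustrative equivalent of "SYMLINK libspdk_ut.so": keep an unversioned
# link pointing at the current versioned shared object.
ln -sf libspdk_ut.so.2.0 libspdk_ut.so
```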
00:03:12.947 LIB libspdk_ut_mock.a 00:03:12.947 SO libspdk_ut.so.2.0 00:03:12.947 SO libspdk_log.so.7.0 00:03:13.205 SO libspdk_ut_mock.so.6.0 00:03:13.205 SYMLINK libspdk_ut.so 00:03:13.205 SYMLINK libspdk_ut_mock.so 00:03:13.205 SYMLINK libspdk_log.so 00:03:13.463 CC lib/dma/dma.o 00:03:13.463 CC lib/util/base64.o 00:03:13.463 CC lib/util/bit_array.o 00:03:13.463 CC lib/util/cpuset.o 00:03:13.463 CC lib/util/crc16.o 00:03:13.463 CC lib/util/crc32.o 00:03:13.463 CC lib/util/crc32c.o 00:03:13.463 CC lib/util/crc64.o 00:03:13.463 CC lib/util/crc32_ieee.o 00:03:13.463 CC lib/util/fd.o 00:03:13.464 CC lib/util/dif.o 00:03:13.464 CC lib/ioat/ioat.o 00:03:13.464 CC lib/util/file.o 00:03:13.464 CXX lib/trace_parser/trace.o 00:03:13.464 CC lib/util/hexlify.o 00:03:13.464 CC lib/util/iov.o 00:03:13.464 CC lib/util/math.o 00:03:13.464 CC lib/util/pipe.o 00:03:13.464 CC lib/util/strerror_tls.o 00:03:13.464 CC lib/util/string.o 00:03:13.464 CC lib/util/uuid.o 00:03:13.464 CC lib/util/fd_group.o 00:03:13.464 CC lib/util/xor.o 00:03:13.464 CC lib/util/zipf.o 00:03:13.722 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.722 CC lib/vfio_user/host/vfio_user.o 00:03:13.722 LIB libspdk_dma.a 00:03:13.722 SO libspdk_dma.so.4.0 00:03:13.722 SYMLINK libspdk_dma.so 00:03:13.722 LIB libspdk_ioat.a 00:03:13.722 SO libspdk_ioat.so.7.0 00:03:13.980 LIB libspdk_vfio_user.a 00:03:13.980 SYMLINK libspdk_ioat.so 00:03:13.980 SO libspdk_vfio_user.so.5.0 00:03:13.980 SYMLINK libspdk_vfio_user.so 00:03:13.980 LIB libspdk_util.a 00:03:13.980 SO libspdk_util.so.9.0 00:03:14.239 SYMLINK libspdk_util.so 00:03:14.497 LIB libspdk_trace_parser.a 00:03:14.497 SO libspdk_trace_parser.so.5.0 00:03:14.497 SYMLINK libspdk_trace_parser.so 00:03:14.497 CC lib/idxd/idxd.o 00:03:14.497 CC lib/idxd/idxd_user.o 00:03:14.497 CC lib/idxd/idxd_kernel.o 00:03:14.497 CC lib/conf/conf.o 00:03:14.497 CC lib/env_dpdk/env.o 00:03:14.497 CC lib/env_dpdk/memory.o 00:03:14.497 CC lib/env_dpdk/pci.o 00:03:14.497 CC lib/env_dpdk/init.o 00:03:14.497 CC lib/env_dpdk/pci_ioat.o 00:03:14.497 CC lib/env_dpdk/threads.o 00:03:14.497 CC lib/env_dpdk/pci_virtio.o 00:03:14.497 CC lib/env_dpdk/pci_vmd.o 00:03:14.497 CC lib/env_dpdk/pci_idxd.o 00:03:14.497 CC lib/rdma/common.o 00:03:14.497 CC lib/env_dpdk/pci_event.o 00:03:14.497 CC lib/vmd/vmd.o 00:03:14.497 CC lib/env_dpdk/sigbus_handler.o 00:03:14.497 CC lib/vmd/led.o 00:03:14.497 CC lib/rdma/rdma_verbs.o 00:03:14.497 CC lib/env_dpdk/pci_dpdk.o 00:03:14.497 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:14.497 CC lib/json/json_parse.o 00:03:14.497 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:14.497 CC lib/json/json_util.o 00:03:14.497 CC lib/json/json_write.o 00:03:14.755 LIB libspdk_conf.a 00:03:14.755 SO libspdk_conf.so.6.0 00:03:15.013 LIB libspdk_rdma.a 00:03:15.013 LIB libspdk_json.a 00:03:15.013 SYMLINK libspdk_conf.so 00:03:15.013 SO libspdk_json.so.6.0 00:03:15.013 SO libspdk_rdma.so.6.0 00:03:15.013 SYMLINK libspdk_json.so 00:03:15.013 SYMLINK libspdk_rdma.so 00:03:15.271 LIB libspdk_idxd.a 00:03:15.271 SO libspdk_idxd.so.12.0 00:03:15.271 LIB libspdk_vmd.a 00:03:15.271 SYMLINK libspdk_idxd.so 00:03:15.271 SO libspdk_vmd.so.6.0 00:03:15.271 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.271 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.271 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.271 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:15.271 SYMLINK libspdk_vmd.so 00:03:15.529 LIB libspdk_jsonrpc.a 00:03:15.529 SO libspdk_jsonrpc.so.6.0 00:03:15.812 SYMLINK libspdk_jsonrpc.so 00:03:16.100 LIB libspdk_env_dpdk.a 00:03:16.100 CC 
lib/rpc/rpc.o 00:03:16.100 SO libspdk_env_dpdk.so.14.1 00:03:16.100 LIB libspdk_rpc.a 00:03:16.100 SYMLINK libspdk_env_dpdk.so 00:03:16.100 SO libspdk_rpc.so.6.0 00:03:16.374 SYMLINK libspdk_rpc.so 00:03:16.633 CC lib/keyring/keyring.o 00:03:16.633 CC lib/keyring/keyring_rpc.o 00:03:16.633 CC lib/notify/notify.o 00:03:16.633 CC lib/trace/trace_flags.o 00:03:16.633 CC lib/notify/notify_rpc.o 00:03:16.633 CC lib/trace/trace.o 00:03:16.633 CC lib/trace/trace_rpc.o 00:03:16.633 LIB libspdk_notify.a 00:03:16.891 SO libspdk_notify.so.6.0 00:03:16.891 LIB libspdk_keyring.a 00:03:16.891 LIB libspdk_trace.a 00:03:16.891 SYMLINK libspdk_notify.so 00:03:16.891 SO libspdk_keyring.so.1.0 00:03:16.891 SO libspdk_trace.so.10.0 00:03:16.891 SYMLINK libspdk_keyring.so 00:03:16.891 SYMLINK libspdk_trace.so 00:03:17.150 CC lib/thread/thread.o 00:03:17.151 CC lib/thread/iobuf.o 00:03:17.410 CC lib/sock/sock.o 00:03:17.410 CC lib/sock/sock_rpc.o 00:03:17.668 LIB libspdk_sock.a 00:03:17.668 SO libspdk_sock.so.9.0 00:03:17.927 SYMLINK libspdk_sock.so 00:03:18.186 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.186 CC lib/nvme/nvme_ctrlr.o 00:03:18.186 CC lib/nvme/nvme_fabric.o 00:03:18.186 CC lib/nvme/nvme_ns_cmd.o 00:03:18.186 CC lib/nvme/nvme_pcie_common.o 00:03:18.186 CC lib/nvme/nvme_ns.o 00:03:18.186 CC lib/nvme/nvme_pcie.o 00:03:18.186 CC lib/nvme/nvme_quirks.o 00:03:18.186 CC lib/nvme/nvme_qpair.o 00:03:18.186 CC lib/nvme/nvme.o 00:03:18.186 CC lib/nvme/nvme_transport.o 00:03:18.186 CC lib/nvme/nvme_discovery.o 00:03:18.186 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.186 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.186 CC lib/nvme/nvme_tcp.o 00:03:18.186 CC lib/nvme/nvme_opal.o 00:03:18.186 CC lib/nvme/nvme_io_msg.o 00:03:18.186 CC lib/nvme/nvme_poll_group.o 00:03:18.186 CC lib/nvme/nvme_zns.o 00:03:18.186 CC lib/nvme/nvme_stubs.o 00:03:18.186 CC lib/nvme/nvme_auth.o 00:03:18.186 CC lib/nvme/nvme_cuse.o 00:03:18.186 CC lib/nvme/nvme_vfio_user.o 00:03:18.186 CC lib/nvme/nvme_rdma.o 00:03:18.752 LIB libspdk_thread.a 00:03:18.752 SO libspdk_thread.so.10.0 00:03:18.752 SYMLINK libspdk_thread.so 00:03:19.317 CC lib/init/subsystem_rpc.o 00:03:19.317 CC lib/init/json_config.o 00:03:19.317 CC lib/init/subsystem.o 00:03:19.317 CC lib/vfu_tgt/tgt_endpoint.o 00:03:19.317 CC lib/vfu_tgt/tgt_rpc.o 00:03:19.317 CC lib/init/rpc.o 00:03:19.317 CC lib/accel/accel.o 00:03:19.317 CC lib/blob/blobstore.o 00:03:19.317 CC lib/accel/accel_rpc.o 00:03:19.317 CC lib/virtio/virtio.o 00:03:19.317 CC lib/accel/accel_sw.o 00:03:19.317 CC lib/virtio/virtio_vhost_user.o 00:03:19.317 CC lib/blob/request.o 00:03:19.317 CC lib/virtio/virtio_vfio_user.o 00:03:19.317 CC lib/blob/zeroes.o 00:03:19.317 CC lib/virtio/virtio_pci.o 00:03:19.317 CC lib/blob/blob_bs_dev.o 00:03:19.317 LIB libspdk_init.a 00:03:19.575 SO libspdk_init.so.5.0 00:03:19.575 LIB libspdk_vfu_tgt.a 00:03:19.575 LIB libspdk_virtio.a 00:03:19.575 SYMLINK libspdk_init.so 00:03:19.575 SO libspdk_vfu_tgt.so.3.0 00:03:19.575 SO libspdk_virtio.so.7.0 00:03:19.575 SYMLINK libspdk_vfu_tgt.so 00:03:19.575 SYMLINK libspdk_virtio.so 00:03:19.833 CC lib/event/log_rpc.o 00:03:19.834 CC lib/event/app.o 00:03:19.834 CC lib/event/reactor.o 00:03:19.834 CC lib/event/app_rpc.o 00:03:19.834 CC lib/event/scheduler_static.o 00:03:20.091 LIB libspdk_accel.a 00:03:20.350 SO libspdk_accel.so.15.0 00:03:20.350 LIB libspdk_event.a 00:03:20.350 LIB libspdk_nvme.a 00:03:20.350 SO libspdk_event.so.13.1 00:03:20.350 SYMLINK libspdk_accel.so 00:03:20.350 SYMLINK libspdk_event.so 00:03:20.350 SO libspdk_nvme.so.13.0 
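The SO/SYMLINK pairs continue up through the higher-level libraries (event, accel, nvme). When a link step in this region misbehaves, inspecting what a just-built shared object actually resolved against can help; this is a debugging aid only, not a step the build performs, and the build/lib path is an assumption about the local tree layout.

```bash
# Show which SPDK/DPDK libraries libspdk_nvme.so.13.0 (named in the log above)
# pulled in; path is a stand-in for the workspace's build output directory.
ldd build/lib/libspdk_nvme.so.13.0 | grep -E 'libspdk|librte'
```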
00:03:20.608 CC lib/bdev/bdev.o 00:03:20.608 CC lib/bdev/bdev_rpc.o 00:03:20.608 CC lib/bdev/bdev_zone.o 00:03:20.608 CC lib/bdev/part.o 00:03:20.608 CC lib/bdev/scsi_nvme.o 00:03:20.867 SYMLINK libspdk_nvme.so 00:03:22.246 LIB libspdk_blob.a 00:03:22.246 SO libspdk_blob.so.11.0 00:03:22.246 SYMLINK libspdk_blob.so 00:03:22.504 CC lib/lvol/lvol.o 00:03:22.504 CC lib/blobfs/blobfs.o 00:03:22.504 CC lib/blobfs/tree.o 00:03:23.436 LIB libspdk_bdev.a 00:03:23.436 SO libspdk_bdev.so.15.0 00:03:23.436 SYMLINK libspdk_bdev.so 00:03:23.436 LIB libspdk_blobfs.a 00:03:23.436 SO libspdk_blobfs.so.10.0 00:03:23.436 LIB libspdk_lvol.a 00:03:23.436 SYMLINK libspdk_blobfs.so 00:03:23.693 SO libspdk_lvol.so.10.0 00:03:23.693 SYMLINK libspdk_lvol.so 00:03:23.693 CC lib/ublk/ublk.o 00:03:23.693 CC lib/ublk/ublk_rpc.o 00:03:23.693 CC lib/nvmf/ctrlr.o 00:03:23.693 CC lib/nvmf/ctrlr_discovery.o 00:03:23.693 CC lib/nvmf/ctrlr_bdev.o 00:03:23.693 CC lib/nvmf/nvmf.o 00:03:23.693 CC lib/nvmf/subsystem.o 00:03:23.693 CC lib/ftl/ftl_core.o 00:03:23.693 CC lib/nvmf/nvmf_rpc.o 00:03:23.693 CC lib/ftl/ftl_init.o 00:03:23.693 CC lib/nbd/nbd.o 00:03:23.693 CC lib/nvmf/transport.o 00:03:23.693 CC lib/ftl/ftl_layout.o 00:03:23.693 CC lib/nvmf/tcp.o 00:03:23.693 CC lib/nbd/nbd_rpc.o 00:03:23.693 CC lib/ftl/ftl_debug.o 00:03:23.693 CC lib/scsi/dev.o 00:03:23.693 CC lib/nvmf/stubs.o 00:03:23.693 CC lib/ftl/ftl_io.o 00:03:23.693 CC lib/nvmf/mdns_server.o 00:03:23.693 CC lib/scsi/lun.o 00:03:23.693 CC lib/nvmf/vfio_user.o 00:03:23.693 CC lib/ftl/ftl_sb.o 00:03:23.693 CC lib/ftl/ftl_l2p.o 00:03:23.693 CC lib/scsi/port.o 00:03:23.693 CC lib/nvmf/rdma.o 00:03:23.693 CC lib/ftl/ftl_l2p_flat.o 00:03:23.693 CC lib/scsi/scsi.o 00:03:23.693 CC lib/nvmf/auth.o 00:03:23.693 CC lib/scsi/scsi_bdev.o 00:03:23.693 CC lib/ftl/ftl_nv_cache.o 00:03:23.693 CC lib/ftl/ftl_band.o 00:03:23.693 CC lib/scsi/scsi_pr.o 00:03:23.693 CC lib/scsi/task.o 00:03:23.693 CC lib/scsi/scsi_rpc.o 00:03:23.693 CC lib/ftl/ftl_band_ops.o 00:03:23.693 CC lib/ftl/ftl_writer.o 00:03:23.693 CC lib/ftl/ftl_rq.o 00:03:23.693 CC lib/ftl/ftl_reloc.o 00:03:23.693 CC lib/ftl/ftl_l2p_cache.o 00:03:23.693 CC lib/ftl/ftl_p2l.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:23.693 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:23.693 CC lib/ftl/utils/ftl_conf.o 00:03:23.693 CC lib/ftl/utils/ftl_md.o 00:03:23.694 CC lib/ftl/utils/ftl_mempool.o 00:03:23.694 CC lib/ftl/utils/ftl_property.o 00:03:23.694 CC lib/ftl/utils/ftl_bitmap.o 00:03:23.694 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:23.694 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:23.694 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:23.694 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.694 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:23.694 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:23.694 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:23.694 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:23.694 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:23.694 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.694 CC lib/ftl/base/ftl_base_dev.o 00:03:23.694 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.694 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.694 CC lib/ftl/ftl_trace.o 00:03:24.259 LIB libspdk_nbd.a 00:03:24.518 SO libspdk_nbd.so.7.0 00:03:24.518 LIB libspdk_scsi.a 00:03:24.518 SYMLINK libspdk_nbd.so 00:03:24.518 SO libspdk_scsi.so.9.0 00:03:24.518 LIB libspdk_ublk.a 00:03:24.518 SYMLINK libspdk_scsi.so 00:03:24.518 SO libspdk_ublk.so.3.0 00:03:24.777 SYMLINK libspdk_ublk.so 00:03:24.777 LIB libspdk_ftl.a 00:03:25.035 CC lib/vhost/vhost.o 00:03:25.035 CC lib/vhost/vhost_rpc.o 00:03:25.035 CC lib/vhost/vhost_scsi.o 00:03:25.035 CC lib/vhost/vhost_blk.o 00:03:25.035 CC lib/vhost/rte_vhost_user.o 00:03:25.035 CC lib/iscsi/conn.o 00:03:25.035 CC lib/iscsi/init_grp.o 00:03:25.035 CC lib/iscsi/iscsi.o 00:03:25.035 CC lib/iscsi/md5.o 00:03:25.035 CC lib/iscsi/param.o 00:03:25.035 CC lib/iscsi/portal_grp.o 00:03:25.035 CC lib/iscsi/tgt_node.o 00:03:25.035 CC lib/iscsi/iscsi_subsystem.o 00:03:25.035 CC lib/iscsi/iscsi_rpc.o 00:03:25.035 CC lib/iscsi/task.o 00:03:25.035 SO libspdk_ftl.so.9.0 00:03:25.294 SYMLINK libspdk_ftl.so 00:03:26.232 LIB libspdk_vhost.a 00:03:26.232 LIB libspdk_nvmf.a 00:03:26.232 SO libspdk_vhost.so.8.0 00:03:26.232 SO libspdk_nvmf.so.18.1 00:03:26.232 SYMLINK libspdk_vhost.so 00:03:26.232 LIB libspdk_iscsi.a 00:03:26.492 SO libspdk_iscsi.so.8.0 00:03:26.492 SYMLINK libspdk_nvmf.so 00:03:26.492 SYMLINK libspdk_iscsi.so 00:03:27.059 CC module/env_dpdk/env_dpdk_rpc.o 00:03:27.059 CC module/vfu_device/vfu_virtio.o 00:03:27.059 CC module/vfu_device/vfu_virtio_blk.o 00:03:27.059 CC module/vfu_device/vfu_virtio_rpc.o 00:03:27.059 CC module/vfu_device/vfu_virtio_scsi.o 00:03:27.059 CC module/accel/iaa/accel_iaa.o 00:03:27.059 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.059 CC module/accel/error/accel_error_rpc.o 00:03:27.059 CC module/accel/error/accel_error.o 00:03:27.059 CC module/blob/bdev/blob_bdev.o 00:03:27.059 CC module/accel/dsa/accel_dsa.o 00:03:27.059 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.317 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.317 CC module/accel/ioat/accel_ioat.o 00:03:27.317 CC module/keyring/linux/keyring.o 00:03:27.317 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.317 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.317 CC module/keyring/linux/keyring_rpc.o 00:03:27.317 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.317 CC module/sock/posix/posix.o 00:03:27.317 CC module/keyring/file/keyring.o 00:03:27.317 CC module/keyring/file/keyring_rpc.o 00:03:27.317 LIB libspdk_env_dpdk_rpc.a 00:03:27.317 SO libspdk_env_dpdk_rpc.so.6.0 00:03:27.317 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.317 LIB libspdk_keyring_linux.a 00:03:27.317 LIB libspdk_scheduler_gscheduler.a 00:03:27.317 LIB libspdk_keyring_file.a 00:03:27.317 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.317 LIB libspdk_accel_error.a 00:03:27.317 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.317 LIB libspdk_accel_iaa.a 00:03:27.317 SO libspdk_keyring_linux.so.1.0 00:03:27.317 LIB libspdk_accel_ioat.a 00:03:27.317 LIB libspdk_scheduler_dynamic.a 00:03:27.317 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.317 SO libspdk_keyring_file.so.1.0 00:03:27.577 SO libspdk_accel_error.so.2.0 00:03:27.577 SO libspdk_accel_iaa.so.3.0 00:03:27.577 SO libspdk_scheduler_dynamic.so.4.0 00:03:27.577 SO libspdk_accel_ioat.so.6.0 00:03:27.577 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.577 LIB libspdk_accel_dsa.a 00:03:27.577 SYMLINK libspdk_keyring_linux.so 00:03:27.577 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:27.577 LIB 
libspdk_blob_bdev.a 00:03:27.577 SYMLINK libspdk_keyring_file.so 00:03:27.577 SYMLINK libspdk_accel_error.so 00:03:27.577 SYMLINK libspdk_scheduler_dynamic.so 00:03:27.577 SO libspdk_accel_dsa.so.5.0 00:03:27.577 SYMLINK libspdk_accel_ioat.so 00:03:27.577 SO libspdk_blob_bdev.so.11.0 00:03:27.577 SYMLINK libspdk_accel_iaa.so 00:03:27.577 SYMLINK libspdk_blob_bdev.so 00:03:27.577 SYMLINK libspdk_accel_dsa.so 00:03:27.577 LIB libspdk_vfu_device.a 00:03:27.577 SO libspdk_vfu_device.so.3.0 00:03:27.836 SYMLINK libspdk_vfu_device.so 00:03:28.095 LIB libspdk_sock_posix.a 00:03:28.095 SO libspdk_sock_posix.so.6.0 00:03:28.095 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.095 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.095 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.095 CC module/bdev/gpt/gpt.o 00:03:28.095 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.095 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.095 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.095 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.095 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.095 CC module/bdev/error/vbdev_error.o 00:03:28.095 CC module/bdev/null/bdev_null_rpc.o 00:03:28.095 CC module/bdev/malloc/bdev_malloc.o 00:03:28.095 CC module/bdev/null/bdev_null.o 00:03:28.095 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.095 CC module/bdev/split/vbdev_split.o 00:03:28.095 CC module/bdev/nvme/bdev_nvme.o 00:03:28.095 CC module/bdev/raid/bdev_raid.o 00:03:28.095 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.095 CC module/bdev/raid/bdev_raid_rpc.o 00:03:28.095 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.095 CC module/bdev/raid/raid0.o 00:03:28.095 CC module/bdev/raid/bdev_raid_sb.o 00:03:28.095 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.095 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.095 CC module/bdev/raid/raid1.o 00:03:28.095 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.095 CC module/bdev/nvme/nvme_rpc.o 00:03:28.095 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.095 CC module/bdev/delay/vbdev_delay.o 00:03:28.095 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.095 CC module/bdev/raid/concat.o 00:03:28.095 CC module/bdev/nvme/vbdev_opal.o 00:03:28.095 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.095 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.095 CC module/bdev/aio/bdev_aio.o 00:03:28.095 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.095 CC module/bdev/ftl/bdev_ftl.o 00:03:28.095 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.095 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.095 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.095 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.095 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.095 SYMLINK libspdk_sock_posix.so 00:03:28.355 LIB libspdk_blobfs_bdev.a 00:03:28.355 SO libspdk_blobfs_bdev.so.6.0 00:03:28.355 LIB libspdk_bdev_split.a 00:03:28.355 LIB libspdk_bdev_error.a 00:03:28.355 LIB libspdk_bdev_gpt.a 00:03:28.355 SO libspdk_bdev_split.so.6.0 00:03:28.355 LIB libspdk_bdev_null.a 00:03:28.355 SYMLINK libspdk_blobfs_bdev.so 00:03:28.355 SO libspdk_bdev_error.so.6.0 00:03:28.355 SO libspdk_bdev_gpt.so.6.0 00:03:28.355 LIB libspdk_bdev_passthru.a 00:03:28.355 LIB libspdk_bdev_ftl.a 00:03:28.614 SO libspdk_bdev_null.so.6.0 00:03:28.614 SO libspdk_bdev_passthru.so.6.0 00:03:28.614 SO libspdk_bdev_ftl.so.6.0 00:03:28.614 SYMLINK libspdk_bdev_split.so 00:03:28.614 LIB libspdk_bdev_malloc.a 00:03:28.614 SYMLINK libspdk_bdev_error.so 00:03:28.614 LIB libspdk_bdev_aio.a 00:03:28.614 LIB libspdk_bdev_zone_block.a 00:03:28.614 SYMLINK 
libspdk_bdev_gpt.so 00:03:28.614 LIB libspdk_bdev_delay.a 00:03:28.614 SO libspdk_bdev_malloc.so.6.0 00:03:28.614 SYMLINK libspdk_bdev_null.so 00:03:28.614 LIB libspdk_bdev_iscsi.a 00:03:28.614 SO libspdk_bdev_aio.so.6.0 00:03:28.614 SO libspdk_bdev_zone_block.so.6.0 00:03:28.614 SYMLINK libspdk_bdev_passthru.so 00:03:28.614 SYMLINK libspdk_bdev_ftl.so 00:03:28.614 SO libspdk_bdev_delay.so.6.0 00:03:28.614 SO libspdk_bdev_iscsi.so.6.0 00:03:28.614 LIB libspdk_bdev_lvol.a 00:03:28.614 SYMLINK libspdk_bdev_malloc.so 00:03:28.614 SYMLINK libspdk_bdev_aio.so 00:03:28.614 SYMLINK libspdk_bdev_zone_block.so 00:03:28.614 SYMLINK libspdk_bdev_delay.so 00:03:28.614 SO libspdk_bdev_lvol.so.6.0 00:03:28.614 SYMLINK libspdk_bdev_iscsi.so 00:03:28.614 LIB libspdk_bdev_virtio.a 00:03:28.873 SYMLINK libspdk_bdev_lvol.so 00:03:28.873 SO libspdk_bdev_virtio.so.6.0 00:03:28.873 SYMLINK libspdk_bdev_virtio.so 00:03:29.131 LIB libspdk_bdev_raid.a 00:03:29.131 SO libspdk_bdev_raid.so.6.0 00:03:29.390 SYMLINK libspdk_bdev_raid.so 00:03:30.327 LIB libspdk_bdev_nvme.a 00:03:30.327 SO libspdk_bdev_nvme.so.7.0 00:03:30.586 SYMLINK libspdk_bdev_nvme.so 00:03:31.154 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.154 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.154 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.154 CC module/event/subsystems/vmd/vmd.o 00:03:31.154 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.154 CC module/event/subsystems/keyring/keyring.o 00:03:31.154 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.154 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:31.154 CC module/event/subsystems/sock/sock.o 00:03:31.413 LIB libspdk_event_vhost_blk.a 00:03:31.413 SO libspdk_event_vhost_blk.so.3.0 00:03:31.413 LIB libspdk_event_keyring.a 00:03:31.413 LIB libspdk_event_scheduler.a 00:03:31.413 LIB libspdk_event_vmd.a 00:03:31.413 LIB libspdk_event_iobuf.a 00:03:31.413 LIB libspdk_event_vfu_tgt.a 00:03:31.413 LIB libspdk_event_sock.a 00:03:31.413 SO libspdk_event_iobuf.so.3.0 00:03:31.413 SO libspdk_event_keyring.so.1.0 00:03:31.413 SO libspdk_event_scheduler.so.4.0 00:03:31.413 SO libspdk_event_vmd.so.6.0 00:03:31.413 SO libspdk_event_vfu_tgt.so.3.0 00:03:31.413 SO libspdk_event_sock.so.5.0 00:03:31.413 SYMLINK libspdk_event_vhost_blk.so 00:03:31.413 SYMLINK libspdk_event_iobuf.so 00:03:31.413 SYMLINK libspdk_event_keyring.so 00:03:31.413 SYMLINK libspdk_event_sock.so 00:03:31.413 SYMLINK libspdk_event_scheduler.so 00:03:31.413 SYMLINK libspdk_event_vfu_tgt.so 00:03:31.413 SYMLINK libspdk_event_vmd.so 00:03:31.672 CC module/event/subsystems/accel/accel.o 00:03:31.932 LIB libspdk_event_accel.a 00:03:31.932 SO libspdk_event_accel.so.6.0 00:03:31.932 SYMLINK libspdk_event_accel.so 00:03:32.192 CC module/event/subsystems/bdev/bdev.o 00:03:32.450 LIB libspdk_event_bdev.a 00:03:32.450 SO libspdk_event_bdev.so.6.0 00:03:32.709 SYMLINK libspdk_event_bdev.so 00:03:32.970 CC module/event/subsystems/nbd/nbd.o 00:03:32.970 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:32.970 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:32.970 CC module/event/subsystems/ublk/ublk.o 00:03:32.970 CC module/event/subsystems/scsi/scsi.o 00:03:32.970 LIB libspdk_event_nbd.a 00:03:32.970 SO libspdk_event_nbd.so.6.0 00:03:32.970 LIB libspdk_event_ublk.a 00:03:32.970 LIB libspdk_event_scsi.a 00:03:32.970 SO libspdk_event_ublk.so.3.0 00:03:33.229 SYMLINK libspdk_event_nbd.so 00:03:33.229 SO libspdk_event_scsi.so.6.0 00:03:33.229 LIB libspdk_event_nvmf.a 00:03:33.229 SYMLINK libspdk_event_ublk.so 
00:03:33.229 SYMLINK libspdk_event_scsi.so 00:03:33.229 SO libspdk_event_nvmf.so.6.0 00:03:33.229 SYMLINK libspdk_event_nvmf.so 00:03:33.488 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.488 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.747 LIB libspdk_event_vhost_scsi.a 00:03:33.747 LIB libspdk_event_iscsi.a 00:03:33.747 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.747 SO libspdk_event_iscsi.so.6.0 00:03:33.747 SYMLINK libspdk_event_vhost_scsi.so 00:03:33.747 SYMLINK libspdk_event_iscsi.so 00:03:34.006 SO libspdk.so.6.0 00:03:34.006 SYMLINK libspdk.so 00:03:34.265 CC app/spdk_top/spdk_top.o 00:03:34.265 CXX app/trace/trace.o 00:03:34.265 CC app/spdk_lspci/spdk_lspci.o 00:03:34.265 TEST_HEADER include/spdk/accel.h 00:03:34.265 CC app/spdk_nvme_perf/perf.o 00:03:34.265 TEST_HEADER include/spdk/accel_module.h 00:03:34.265 TEST_HEADER include/spdk/assert.h 00:03:34.265 TEST_HEADER include/spdk/barrier.h 00:03:34.265 TEST_HEADER include/spdk/bdev.h 00:03:34.265 TEST_HEADER include/spdk/base64.h 00:03:34.265 TEST_HEADER include/spdk/bdev_zone.h 00:03:34.265 CC test/rpc_client/rpc_client_test.o 00:03:34.265 TEST_HEADER include/spdk/bdev_module.h 00:03:34.265 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.265 TEST_HEADER include/spdk/bit_array.h 00:03:34.265 TEST_HEADER include/spdk/bit_pool.h 00:03:34.265 CC app/trace_record/trace_record.o 00:03:34.265 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:34.265 TEST_HEADER include/spdk/blob_bdev.h 00:03:34.265 CC app/spdk_nvme_identify/identify.o 00:03:34.265 TEST_HEADER include/spdk/blobfs.h 00:03:34.265 TEST_HEADER include/spdk/blob.h 00:03:34.265 TEST_HEADER include/spdk/conf.h 00:03:34.265 TEST_HEADER include/spdk/cpuset.h 00:03:34.265 TEST_HEADER include/spdk/config.h 00:03:34.265 TEST_HEADER include/spdk/crc16.h 00:03:34.265 TEST_HEADER include/spdk/crc32.h 00:03:34.265 TEST_HEADER include/spdk/crc64.h 00:03:34.265 TEST_HEADER include/spdk/dif.h 00:03:34.265 TEST_HEADER include/spdk/endian.h 00:03:34.265 TEST_HEADER include/spdk/dma.h 00:03:34.265 TEST_HEADER include/spdk/env.h 00:03:34.265 TEST_HEADER include/spdk/env_dpdk.h 00:03:34.265 TEST_HEADER include/spdk/event.h 00:03:34.265 TEST_HEADER include/spdk/fd_group.h 00:03:34.265 TEST_HEADER include/spdk/fd.h 00:03:34.265 TEST_HEADER include/spdk/file.h 00:03:34.265 TEST_HEADER include/spdk/ftl.h 00:03:34.265 TEST_HEADER include/spdk/gpt_spec.h 00:03:34.265 CC app/spdk_dd/spdk_dd.o 00:03:34.265 TEST_HEADER include/spdk/hexlify.h 00:03:34.265 TEST_HEADER include/spdk/histogram_data.h 00:03:34.265 TEST_HEADER include/spdk/idxd.h 00:03:34.265 TEST_HEADER include/spdk/init.h 00:03:34.265 TEST_HEADER include/spdk/idxd_spec.h 00:03:34.265 TEST_HEADER include/spdk/ioat.h 00:03:34.265 TEST_HEADER include/spdk/ioat_spec.h 00:03:34.265 TEST_HEADER include/spdk/json.h 00:03:34.265 TEST_HEADER include/spdk/jsonrpc.h 00:03:34.265 TEST_HEADER include/spdk/iscsi_spec.h 00:03:34.265 CC app/iscsi_tgt/iscsi_tgt.o 00:03:34.265 TEST_HEADER include/spdk/likely.h 00:03:34.265 TEST_HEADER include/spdk/keyring.h 00:03:34.265 TEST_HEADER include/spdk/log.h 00:03:34.265 TEST_HEADER include/spdk/keyring_module.h 00:03:34.265 TEST_HEADER include/spdk/lvol.h 00:03:34.265 CC app/nvmf_tgt/nvmf_main.o 00:03:34.265 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.265 TEST_HEADER include/spdk/memory.h 00:03:34.265 TEST_HEADER include/spdk/nbd.h 00:03:34.265 TEST_HEADER include/spdk/mmio.h 00:03:34.265 TEST_HEADER include/spdk/notify.h 00:03:34.265 CC app/vhost/vhost.o 00:03:34.265 TEST_HEADER 
include/spdk/nvme_intel.h 00:03:34.265 TEST_HEADER include/spdk/nvme.h 00:03:34.265 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:34.265 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:34.265 TEST_HEADER include/spdk/nvme_spec.h 00:03:34.265 TEST_HEADER include/spdk/nvme_zns.h 00:03:34.265 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:34.265 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:34.265 TEST_HEADER include/spdk/nvmf_spec.h 00:03:34.524 TEST_HEADER include/spdk/nvmf.h 00:03:34.524 TEST_HEADER include/spdk/nvmf_transport.h 00:03:34.524 TEST_HEADER include/spdk/opal.h 00:03:34.524 TEST_HEADER include/spdk/pci_ids.h 00:03:34.524 TEST_HEADER include/spdk/opal_spec.h 00:03:34.524 TEST_HEADER include/spdk/pipe.h 00:03:34.524 TEST_HEADER include/spdk/queue.h 00:03:34.524 TEST_HEADER include/spdk/rpc.h 00:03:34.524 TEST_HEADER include/spdk/scheduler.h 00:03:34.524 CC app/spdk_tgt/spdk_tgt.o 00:03:34.524 TEST_HEADER include/spdk/reduce.h 00:03:34.524 TEST_HEADER include/spdk/scsi.h 00:03:34.524 TEST_HEADER include/spdk/scsi_spec.h 00:03:34.524 TEST_HEADER include/spdk/sock.h 00:03:34.524 TEST_HEADER include/spdk/stdinc.h 00:03:34.524 TEST_HEADER include/spdk/string.h 00:03:34.524 TEST_HEADER include/spdk/thread.h 00:03:34.524 TEST_HEADER include/spdk/trace.h 00:03:34.524 TEST_HEADER include/spdk/tree.h 00:03:34.524 TEST_HEADER include/spdk/ublk.h 00:03:34.524 TEST_HEADER include/spdk/trace_parser.h 00:03:34.524 TEST_HEADER include/spdk/util.h 00:03:34.524 TEST_HEADER include/spdk/uuid.h 00:03:34.524 TEST_HEADER include/spdk/version.h 00:03:34.524 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:34.524 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:34.524 TEST_HEADER include/spdk/vhost.h 00:03:34.524 TEST_HEADER include/spdk/xor.h 00:03:34.524 TEST_HEADER include/spdk/vmd.h 00:03:34.524 TEST_HEADER include/spdk/zipf.h 00:03:34.524 CXX test/cpp_headers/accel.o 00:03:34.524 CXX test/cpp_headers/accel_module.o 00:03:34.524 CXX test/cpp_headers/assert.o 00:03:34.524 CXX test/cpp_headers/barrier.o 00:03:34.524 CXX test/cpp_headers/base64.o 00:03:34.524 CXX test/cpp_headers/bdev.o 00:03:34.524 CXX test/cpp_headers/bdev_module.o 00:03:34.524 CXX test/cpp_headers/bdev_zone.o 00:03:34.524 CXX test/cpp_headers/bit_array.o 00:03:34.524 CXX test/cpp_headers/bit_pool.o 00:03:34.524 CXX test/cpp_headers/blob_bdev.o 00:03:34.524 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.524 CXX test/cpp_headers/blobfs.o 00:03:34.524 CXX test/cpp_headers/blob.o 00:03:34.524 CXX test/cpp_headers/config.o 00:03:34.524 CXX test/cpp_headers/conf.o 00:03:34.524 CXX test/cpp_headers/cpuset.o 00:03:34.524 CXX test/cpp_headers/crc16.o 00:03:34.524 CXX test/cpp_headers/crc32.o 00:03:34.524 CXX test/cpp_headers/crc64.o 00:03:34.524 CXX test/cpp_headers/dif.o 00:03:34.524 CXX test/cpp_headers/dma.o 00:03:34.524 CXX test/cpp_headers/env.o 00:03:34.524 CXX test/cpp_headers/event.o 00:03:34.524 CXX test/cpp_headers/endian.o 00:03:34.524 CXX test/cpp_headers/env_dpdk.o 00:03:34.524 CXX test/cpp_headers/fd_group.o 00:03:34.524 CXX test/cpp_headers/file.o 00:03:34.524 CXX test/cpp_headers/fd.o 00:03:34.524 CXX test/cpp_headers/ftl.o 00:03:34.524 CXX test/cpp_headers/gpt_spec.o 00:03:34.524 CXX test/cpp_headers/hexlify.o 00:03:34.524 CXX test/cpp_headers/idxd.o 00:03:34.524 CXX test/cpp_headers/histogram_data.o 00:03:34.524 CXX test/cpp_headers/idxd_spec.o 00:03:34.524 CXX test/cpp_headers/init.o 00:03:34.524 CXX test/cpp_headers/ioat.o 00:03:34.524 CXX test/cpp_headers/ioat_spec.o 00:03:34.524 CC examples/sock/hello_world/hello_sock.o 
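The long TEST_HEADER / CXX test/cpp_headers stream above compiles every public SPDK header on its own, which is what catches headers that are not self-contained. A hand-rolled sketch of the same idea, assuming an include/spdk layout; the real harness is generated by the build system:

```bash
# Compile each public header in isolation so missing includes surface
# immediately; file names under /tmp are illustrative.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "/tmp/chk_$name.cpp"
    g++ -I include -c "/tmp/chk_$name.cpp" -o "/tmp/chk_$name.o"
done
```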
00:03:34.524 CC examples/accel/perf/accel_perf.o 00:03:34.524 CC examples/ioat/perf/perf.o 00:03:34.524 CC examples/nvme/hotplug/hotplug.o 00:03:34.524 CC examples/ioat/verify/verify.o 00:03:34.524 CC test/nvme/e2edp/nvme_dp.o 00:03:34.524 CC test/nvme/err_injection/err_injection.o 00:03:34.524 CC test/app/histogram_perf/histogram_perf.o 00:03:34.524 CC test/env/vtophys/vtophys.o 00:03:34.524 CC test/env/memory/memory_ut.o 00:03:34.524 CC test/nvme/sgl/sgl.o 00:03:34.524 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.524 CC test/env/pci/pci_ut.o 00:03:34.525 CC test/nvme/simple_copy/simple_copy.o 00:03:34.525 CC test/nvme/startup/startup.o 00:03:34.525 CC test/nvme/aer/aer.o 00:03:34.525 CC examples/util/zipf/zipf.o 00:03:34.525 CC examples/vmd/led/led.o 00:03:34.525 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:34.525 CC test/nvme/overhead/overhead.o 00:03:34.525 CC test/nvme/connect_stress/connect_stress.o 00:03:34.794 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.794 CC test/nvme/reserve/reserve.o 00:03:34.794 CC test/app/jsoncat/jsoncat.o 00:03:34.794 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:34.794 CC examples/nvme/hello_world/hello_world.o 00:03:34.794 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.794 CC test/nvme/reset/reset.o 00:03:34.794 CC test/event/reactor/reactor.o 00:03:34.794 CC test/event/reactor_perf/reactor_perf.o 00:03:34.794 CC test/nvme/fdp/fdp.o 00:03:34.794 CC examples/nvme/abort/abort.o 00:03:34.794 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:34.794 CC test/thread/poller_perf/poller_perf.o 00:03:34.794 CC examples/nvme/reconnect/reconnect.o 00:03:34.794 CC examples/thread/thread/thread_ex.o 00:03:34.794 CC test/nvme/fused_ordering/fused_ordering.o 00:03:34.794 CC examples/nvme/arbitration/arbitration.o 00:03:34.794 CC app/fio/bdev/fio_plugin.o 00:03:34.794 CC test/app/stub/stub.o 00:03:34.794 CC test/blobfs/mkfs/mkfs.o 00:03:34.794 CC examples/bdev/bdevperf/bdevperf.o 00:03:34.794 CC test/nvme/cuse/cuse.o 00:03:34.794 CC examples/idxd/perf/perf.o 00:03:34.794 CC test/nvme/boot_partition/boot_partition.o 00:03:34.794 CC test/accel/dif/dif.o 00:03:34.794 CC test/event/event_perf/event_perf.o 00:03:34.794 CC test/bdev/bdevio/bdevio.o 00:03:34.794 CC examples/blob/hello_world/hello_blob.o 00:03:34.794 CC examples/blob/cli/blobcli.o 00:03:34.794 CC test/nvme/compliance/nvme_compliance.o 00:03:34.794 CC app/fio/nvme/fio_plugin.o 00:03:34.794 LINK spdk_lspci 00:03:34.794 CC test/dma/test_dma/test_dma.o 00:03:34.794 CC examples/bdev/hello_world/hello_bdev.o 00:03:34.794 CC test/app/bdev_svc/bdev_svc.o 00:03:34.794 CC test/event/app_repeat/app_repeat.o 00:03:34.794 CC examples/nvmf/nvmf/nvmf.o 00:03:34.794 LINK rpc_client_test 00:03:34.794 CC test/event/scheduler/scheduler.o 00:03:35.057 LINK spdk_nvme_discover 00:03:35.057 LINK nvmf_tgt 00:03:35.057 LINK interrupt_tgt 00:03:35.057 CC test/env/mem_callbacks/mem_callbacks.o 00:03:35.057 LINK vhost 00:03:35.057 LINK spdk_tgt 00:03:35.057 LINK lsvmd 00:03:35.057 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:35.057 CC test/lvol/esnap/esnap.o 00:03:35.057 LINK vtophys 00:03:35.317 LINK histogram_perf 00:03:35.317 LINK pmr_persistence 00:03:35.317 LINK zipf 00:03:35.317 CXX test/cpp_headers/iscsi_spec.o 00:03:35.317 CXX test/cpp_headers/json.o 00:03:35.317 LINK jsoncat 00:03:35.317 CXX test/cpp_headers/jsonrpc.o 00:03:35.317 CXX test/cpp_headers/keyring.o 00:03:35.317 LINK event_perf 00:03:35.317 LINK startup 00:03:35.317 CXX test/cpp_headers/keyring_module.o 00:03:35.317 LINK connect_stress 
00:03:35.317 CXX test/cpp_headers/likely.o 00:03:35.317 CXX test/cpp_headers/log.o 00:03:35.317 CXX test/cpp_headers/lvol.o 00:03:35.317 CXX test/cpp_headers/memory.o 00:03:35.317 CXX test/cpp_headers/mmio.o 00:03:35.317 CXX test/cpp_headers/nbd.o 00:03:35.317 CXX test/cpp_headers/notify.o 00:03:35.317 LINK iscsi_tgt 00:03:35.317 CXX test/cpp_headers/nvme.o 00:03:35.317 CXX test/cpp_headers/nvme_intel.o 00:03:35.317 CXX test/cpp_headers/nvme_ocssd.o 00:03:35.317 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:35.317 CXX test/cpp_headers/nvme_spec.o 00:03:35.317 LINK verify 00:03:35.317 LINK fused_ordering 00:03:35.317 CXX test/cpp_headers/nvme_zns.o 00:03:35.317 LINK mkfs 00:03:35.317 LINK simple_copy 00:03:35.317 LINK hotplug 00:03:35.317 CXX test/cpp_headers/nvmf_cmd.o 00:03:35.317 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:35.317 LINK sgl 00:03:35.317 LINK nvme_dp 00:03:35.317 CXX test/cpp_headers/nvmf.o 00:03:35.317 CXX test/cpp_headers/nvmf_spec.o 00:03:35.317 LINK led 00:03:35.317 LINK spdk_trace_record 00:03:35.317 LINK reactor_perf 00:03:35.318 LINK poller_perf 00:03:35.318 LINK env_dpdk_post_init 00:03:35.580 LINK aer 00:03:35.580 LINK ioat_perf 00:03:35.580 LINK thread 00:03:35.580 LINK reactor 00:03:35.580 CXX test/cpp_headers/nvmf_transport.o 00:03:35.580 LINK boot_partition 00:03:35.580 CXX test/cpp_headers/opal.o 00:03:35.580 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:35.580 LINK hello_sock 00:03:35.580 CXX test/cpp_headers/opal_spec.o 00:03:35.580 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:35.580 CXX test/cpp_headers/pci_ids.o 00:03:35.580 LINK err_injection 00:03:35.580 CXX test/cpp_headers/pipe.o 00:03:35.580 CXX test/cpp_headers/reduce.o 00:03:35.580 CXX test/cpp_headers/queue.o 00:03:35.580 CXX test/cpp_headers/rpc.o 00:03:35.580 LINK scheduler 00:03:35.580 CXX test/cpp_headers/scheduler.o 00:03:35.580 CXX test/cpp_headers/scsi.o 00:03:35.580 CXX test/cpp_headers/scsi_spec.o 00:03:35.580 LINK hello_world 00:03:35.580 CXX test/cpp_headers/sock.o 00:03:35.580 CXX test/cpp_headers/stdinc.o 00:03:35.580 LINK cmb_copy 00:03:35.580 CXX test/cpp_headers/string.o 00:03:35.580 CXX test/cpp_headers/thread.o 00:03:35.580 LINK spdk_trace 00:03:35.580 CXX test/cpp_headers/trace.o 00:03:35.580 CXX test/cpp_headers/trace_parser.o 00:03:35.580 CXX test/cpp_headers/ublk.o 00:03:35.580 CXX test/cpp_headers/tree.o 00:03:35.580 LINK reserve 00:03:35.580 LINK app_repeat 00:03:35.580 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:35.580 CXX test/cpp_headers/util.o 00:03:35.580 LINK stub 00:03:35.580 CXX test/cpp_headers/uuid.o 00:03:35.580 LINK bdevio 00:03:35.580 LINK doorbell_aers 00:03:35.580 CXX test/cpp_headers/version.o 00:03:35.580 LINK nvmf 00:03:35.580 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.580 LINK idxd_perf 00:03:35.580 LINK bdev_svc 00:03:35.580 LINK nvme_compliance 00:03:35.580 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.580 CXX test/cpp_headers/vhost.o 00:03:35.580 CXX test/cpp_headers/vmd.o 00:03:35.580 CXX test/cpp_headers/xor.o 00:03:35.580 CXX test/cpp_headers/zipf.o 00:03:35.580 LINK pci_ut 00:03:35.839 LINK spdk_dd 00:03:35.839 LINK reset 00:03:35.839 LINK hello_bdev 00:03:35.839 LINK hello_blob 00:03:35.839 LINK overhead 00:03:35.839 LINK test_dma 00:03:35.839 LINK dif 00:03:35.839 LINK abort 00:03:35.839 LINK arbitration 00:03:35.839 LINK fdp 00:03:35.839 LINK blobcli 00:03:35.839 LINK spdk_nvme 00:03:35.839 LINK reconnect 00:03:36.097 LINK spdk_nvme_perf 00:03:36.097 LINK nvme_fuzz 00:03:36.097 LINK accel_perf 00:03:36.097 LINK spdk_bdev 00:03:36.097 
LINK mem_callbacks 00:03:36.097 LINK nvme_manage 00:03:36.356 LINK bdevperf 00:03:36.356 LINK vhost_fuzz 00:03:36.615 LINK spdk_nvme_identify 00:03:36.615 LINK memory_ut 00:03:36.615 LINK spdk_top 00:03:36.615 LINK cuse 00:03:37.550 LINK iscsi_fuzz 00:03:40.840 LINK esnap 00:03:40.840 00:03:40.840 real 0m51.488s 00:03:40.840 user 8m23.856s 00:03:40.840 sys 4m39.767s 00:03:40.840 21:20:41 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:03:40.840 21:20:41 make -- common/autotest_common.sh@10 -- $ set +x 00:03:40.840 ************************************ 00:03:40.840 END TEST make 00:03:40.840 ************************************ 00:03:40.840 21:20:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:40.840 21:20:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:40.840 21:20:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:40.840 21:20:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.840 21:20:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:40.840 21:20:41 -- pm/common@44 -- $ pid=1123455 00:03:40.840 21:20:41 -- pm/common@50 -- $ kill -TERM 1123455 00:03:40.840 21:20:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.840 21:20:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:40.840 21:20:41 -- pm/common@44 -- $ pid=1123456 00:03:40.840 21:20:41 -- pm/common@50 -- $ kill -TERM 1123456 00:03:40.840 21:20:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.840 21:20:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:40.840 21:20:41 -- pm/common@44 -- $ pid=1123458 00:03:40.840 21:20:41 -- pm/common@50 -- $ kill -TERM 1123458 00:03:40.840 21:20:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.840 21:20:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:40.840 21:20:41 -- pm/common@44 -- $ pid=1123482 00:03:40.840 21:20:41 -- pm/common@50 -- $ sudo -E kill -TERM 1123482 00:03:41.100 21:20:41 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.100 21:20:41 -- nvmf/common.sh@7 -- # uname -s 00:03:41.100 21:20:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.100 21:20:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.100 21:20:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.100 21:20:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.100 21:20:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.100 21:20:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.100 21:20:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.100 21:20:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.100 21:20:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.100 21:20:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.100 21:20:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:03:41.100 21:20:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:03:41.100 21:20:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.100 21:20:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.100 
21:20:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:41.100 21:20:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.100 21:20:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:41.100 21:20:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.100 21:20:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.100 21:20:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.100 21:20:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.100 21:20:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.100 21:20:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.100 21:20:41 -- paths/export.sh@5 -- # export PATH 00:03:41.100 21:20:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.100 21:20:41 -- nvmf/common.sh@47 -- # : 0 00:03:41.100 21:20:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:41.100 21:20:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:41.100 21:20:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.100 21:20:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.100 21:20:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.100 21:20:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:41.100 21:20:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:41.100 21:20:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:41.100 21:20:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.100 21:20:41 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.100 21:20:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.100 21:20:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.100 21:20:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:41.100 21:20:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.100 21:20:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:41.101 21:20:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:41.101 21:20:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:41.101 21:20:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:41.101 21:20:41 -- spdk/autotest.sh@48 -- # udevadm_pid=1185134 00:03:41.101 21:20:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:41.101 
21:20:41 -- pm/common@17 -- # local monitor 00:03:41.101 21:20:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.101 21:20:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:41.101 21:20:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.101 21:20:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.101 21:20:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.101 21:20:41 -- pm/common@25 -- # sleep 1 00:03:41.101 21:20:41 -- pm/common@21 -- # date +%s 00:03:41.101 21:20:41 -- pm/common@21 -- # date +%s 00:03:41.101 21:20:41 -- pm/common@21 -- # date +%s 00:03:41.101 21:20:41 -- pm/common@21 -- # date +%s 00:03:41.101 21:20:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717788041 00:03:41.101 21:20:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717788041 00:03:41.101 21:20:41 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717788041 00:03:41.101 21:20:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717788041 00:03:41.101 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717788041_collect-vmstat.pm.log 00:03:41.101 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717788041_collect-cpu-load.pm.log 00:03:41.101 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717788041_collect-cpu-temp.pm.log 00:03:41.101 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717788041_collect-bmc-pm.bmc.pm.log 00:03:42.039 21:20:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.040 21:20:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.040 21:20:42 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:42.040 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:03:42.040 21:20:42 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.040 21:20:42 -- common/autotest_common.sh@747 -- # xtrace_disable 00:03:42.040 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:03:42.040 21:20:42 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:42.040 21:20:42 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.040 21:20:42 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.040 21:20:42 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:42.040 21:20:42 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.040 21:20:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:42.040 21:20:42 -- common/autotest_common.sh@1454 -- # uname 00:03:42.040 21:20:42 -- common/autotest_common.sh@1454 -- # '[' Linux = 
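The four "Redirecting to ... .pm.log" lines above show the resource monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) coming up: each is launched with the same epoch suffix from date +%s, logs under ../output/power, and leaves behind a PID that the stop_monitor_resources teardown earlier in this log signals with kill -TERM. A minimal sketch of that pidfile-based start/stop pattern follows; the loop structure, helper names, and /tmp/power directory are illustrative assumptions, not the exact pm/common implementation:

  #!/usr/bin/env bash
  # Sketch of the pidfile-based monitor lifecycle visible in this log.
  out=/tmp/power                 # stand-in for .../spdk/../output/power
  stamp=$(date +%s)              # same suffix as monitor.autotest.sh.<epoch>

  start_monitor() {
      local name=$1
      # Launch the collector in the background and remember its PID,
      # mirroring 'collect-cpu-load -d <out> -l -p monitor.autotest.sh.<epoch>'.
      "./scripts/perf/pm/$name" -d "$out" -l -p "monitor.autotest.sh.$stamp" &
      echo $! > "$out/$name.pid"
  }

  stop_monitors() {
      local pidfile pid
      for pidfile in "$out"/*.pid; do
          [[ -e $pidfile ]] || continue          # monitor may never have started
          pid=$(<"$pidfile")
          kill -TERM "$pid" 2>/dev/null || true  # same signal as the teardown above
      done
  }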
FreeBSD ']' 00:03:42.040 21:20:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.040 21:20:42 -- common/autotest_common.sh@1474 -- # uname 00:03:42.040 21:20:42 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:03:42.040 21:20:42 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:42.040 21:20:42 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:42.040 21:20:42 -- spdk/autotest.sh@72 -- # hash lcov 00:03:42.040 21:20:42 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:42.040 21:20:42 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:42.040 --rc lcov_branch_coverage=1 00:03:42.040 --rc lcov_function_coverage=1 00:03:42.040 --rc genhtml_branch_coverage=1 00:03:42.040 --rc genhtml_function_coverage=1 00:03:42.040 --rc genhtml_legend=1 00:03:42.040 --rc geninfo_all_blocks=1 00:03:42.040 ' 00:03:42.040 21:20:42 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:42.040 --rc lcov_branch_coverage=1 00:03:42.040 --rc lcov_function_coverage=1 00:03:42.040 --rc genhtml_branch_coverage=1 00:03:42.040 --rc genhtml_function_coverage=1 00:03:42.040 --rc genhtml_legend=1 00:03:42.040 --rc geninfo_all_blocks=1 00:03:42.040 ' 00:03:42.040 21:20:42 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:42.040 --rc lcov_branch_coverage=1 00:03:42.040 --rc lcov_function_coverage=1 00:03:42.040 --rc genhtml_branch_coverage=1 00:03:42.040 --rc genhtml_function_coverage=1 00:03:42.040 --rc genhtml_legend=1 00:03:42.040 --rc geninfo_all_blocks=1 00:03:42.040 --no-external' 00:03:42.040 21:20:42 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:42.040 --rc lcov_branch_coverage=1 00:03:42.040 --rc lcov_function_coverage=1 00:03:42.040 --rc genhtml_branch_coverage=1 00:03:42.040 --rc genhtml_function_coverage=1 00:03:42.040 --rc genhtml_legend=1 00:03:42.040 --rc geninfo_all_blocks=1 00:03:42.040 --no-external' 00:03:42.040 21:20:42 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:42.299 lcov: LCOV version 1.14 00:03:42.299 21:20:42 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:57.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:57.284 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:15.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:15.373 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:15.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:15.373 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:15.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:15.373 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 
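The lcov invocation above ("-c -i -t Baseline") captures an initial zero-count baseline over the whole tree right after the build, and geninfo then warns once for every .gcno file that defines no functions, which is expected for the test/cpp_headers objects: they exist only to compile each public header standalone, so there is nothing to instrument. A sketch of the baseline-then-merge workflow these flags imply, using standard lcov 1.14 options as printed in the log; the paths are placeholders:

  # Placeholder paths; the flags are standard lcov 1.14 options as seen above.
  SRC=/path/to/spdk
  OUT=/path/to/output

  # 1. Zero-count baseline immediately after the build (-i = initial capture).
  lcov --no-external -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"

  # 2. ...run the test suites so .gcda counter files get written...

  # 3. Capture real counts, then merge with the baseline so files that were
  #    never executed still show up at 0% instead of disappearing.
  lcov --no-external -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"
  lcov -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"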
00:04:15.373 [geninfo went on to print the same warning pair — '<header>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno' — for every remaining header-compile object under test/cpp_headers, bdev_module through xor; the identical entries are condensed here] 00:04:15.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:15.374 geninfo: WARNING: GCOV did not
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:15.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:15.374 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:15.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:15.374 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:16.752 21:21:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:16.752 21:21:16 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:16.752 21:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:16.752 21:21:16 -- spdk/autotest.sh@91 -- # rm -f 00:04:16.752 21:21:16 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.044 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:04:20.044 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:20.044 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:20.044 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:20.044 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:20.044 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:20.044 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:20.044 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:20.044 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:20.045 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:20.045 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:20.045 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:20.045 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:20.045 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:20.045 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:20.303 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:20.303 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:20.303 21:21:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:20.303 21:21:20 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:20.303 21:21:20 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:20.303 21:21:20 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:20.303 21:21:20 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:20.304 21:21:20 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:20.304 21:21:20 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:20.304 21:21:20 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.304 21:21:20 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:20.304 21:21:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:20.304 21:21:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.304 21:21:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:20.304 21:21:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:20.304 21:21:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:20.304 21:21:20 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.304 No valid GPT data, bailing 00:04:20.304 21:21:20 -- scripts/common.sh@391 -- # blkid 
-s PTTYPE -o value /dev/nvme0n1 00:04:20.304 21:21:20 -- scripts/common.sh@391 -- # pt= 00:04:20.304 21:21:20 -- scripts/common.sh@392 -- # return 1 00:04:20.304 21:21:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:20.304 1+0 records in 00:04:20.304 1+0 records out 00:04:20.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612039 s, 171 MB/s 00:04:20.304 21:21:20 -- spdk/autotest.sh@118 -- # sync 00:04:20.304 21:21:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.304 21:21:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.304 21:21:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:26.872 21:21:26 -- spdk/autotest.sh@124 -- # uname -s 00:04:26.872 21:21:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:26.872 21:21:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:26.872 21:21:26 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:26.872 21:21:26 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:26.872 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:04:26.872 ************************************ 00:04:26.872 START TEST setup.sh 00:04:26.872 ************************************ 00:04:26.872 21:21:26 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:26.872 * Looking for test storage... 00:04:26.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:26.872 21:21:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:26.872 21:21:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:26.872 21:21:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:26.872 21:21:26 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:26.872 21:21:26 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:26.872 21:21:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.872 ************************************ 00:04:26.872 START TEST acl 00:04:26.872 ************************************ 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:26.872 * Looking for test storage... 
00:04:26.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:26.872 21:21:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.872 21:21:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:26.872 21:21:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:26.872 21:21:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:26.872 21:21:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:26.872 21:21:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:26.872 21:21:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:26.872 21:21:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.872 21:21:26 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.163 21:21:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:30.164 21:21:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:30.164 21:21:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.164 21:21:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:30.164 21:21:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.164 21:21:29 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:32.699 Hugepages 00:04:32.699 node hugesize free / total 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.699 00:04:32.699 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.699 21:21:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.699 [the same four-line xtrace cycle — 'read -r _ dev _ _ _ driver _', the PCI-address match, '[[ ioatdma == nvme ]]', 'continue' — repeats for each remaining ioatdma channel, 0000:00:04.2 through 0000:80:04.7; identical entries condensed] 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:32.700 21:21:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:32.700 21:21:32 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:32.700 21:21:32 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:32.700 21:21:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:32.960 ************************************ 00:04:32.960 START TEST denied 00:04:32.960 ************************************ 00:04:32.960 21:21:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:04:32.960 21:21:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:04:32.960 21:21:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:32.960 21:21:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:04:32.960 21:21:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.960 21:21:32 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:36.252 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:04:36.252 21:21:36 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.252 21:21:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.449 00:04:40.449 real 0m7.703s 00:04:40.449 user 0m2.484s 00:04:40.449 sys 0m4.478s 00:04:40.449 21:21:40 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.449 21:21:40 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 ************************************ 00:04:40.449 END TEST denied 00:04:40.449 ************************************ 00:04:40.449 21:21:40 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:40.449 21:21:40 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:40.449 21:21:40 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:40.449 21:21:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:40.709 ************************************ 00:04:40.709 START TEST allowed 00:04:40.709 ************************************ 00:04:40.709 21:21:40 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:04:40.709 21:21:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:04:40.709 21:21:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:40.709 21:21:40 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:04:40.709 21:21:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.709 21:21:40 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.903 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.903 21:21:45 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:44.903 21:21:45 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:44.903 21:21:45 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:44.903 21:21:45 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.903 21:21:45 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.098 00:04:49.098 real 0m7.839s 00:04:49.098 user 0m2.487s 00:04:49.098 sys 0m4.439s 00:04:49.098 21:21:48 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:49.098 21:21:48 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:49.098 ************************************ 00:04:49.098 END TEST allowed 00:04:49.098 ************************************ 00:04:49.098 00:04:49.098 real 0m22.010s 00:04:49.098 user 0m7.265s 00:04:49.098 sys 0m13.260s 00:04:49.098 21:21:48 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:49.098 21:21:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:49.098 ************************************ 00:04:49.098 END TEST acl 00:04:49.098 ************************************ 00:04:49.098 21:21:48 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:49.098 21:21:48 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:49.098 21:21:48 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:04:49.098 21:21:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.098 ************************************ 00:04:49.098 START TEST hugepages 00:04:49.098 ************************************ 00:04:49.098 21:21:48 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:49.098 * Looking for test storage... 00:04:49.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 69397624 kB' 'MemAvailable: 72989640 kB' 'Buffers: 2704 kB' 'Cached: 14652712 kB' 'SwapCached: 0 kB' 'Active: 11633464 kB' 'Inactive: 3615828 kB' 'Active(anon): 11182712 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597184 kB' 'Mapped: 192844 kB' 'Shmem: 10588836 kB' 'KReclaimable: 357664 kB' 'Slab: 1024292 kB' 'SReclaimable: 357664 kB' 'SUnreclaim: 666628 kB' 'KernelStack: 22432 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434728 kB' 'Committed_AS: 12660748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221480 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB' 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.098 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [the same xtrace cycle — '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]', 'continue', "IFS=': '", 'read -r var val _' — repeats for every remaining /proc/meminfo field, MemFree through AnonHugePages and beyond; identical iterations condensed]
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
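
The xtrace above is test/setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until it reaches Hugepagesize (2048 kB on this host). A minimal standalone sketch of that parsing idiom, under an assumed helper name (get_meminfo_sketch; the in-tree helper is get_meminfo):

    #!/usr/bin/env bash
    # Sketch only: mirrors the IFS=': ' / read / continue loop traced above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each miss logs one "continue" in the xtrace
            echo "$val"                        # value without its unit, e.g. 2048 for Hugepagesize (kB)
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch Hugepagesize   # prints 2048 on this 2 MiB-hugepage host
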
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:49.100 21:21:48 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:49.100 21:21:48 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:49.100 21:21:48 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:49.100 21:21:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:49.100 ************************************
00:04:49.100 START TEST default_setup
00:04:49.100 ************************************
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:49.100 21:21:48 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
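
The get_test_nr_hugepages trace above reduces to one division: a 2 GiB request over the 2048 kB page size yields the 1024-page pool, pinned to node 0. A sketch of that arithmetic (variable names mirror the trace; illustrative only):

    size=2097152             # requested pool in kB (2 GiB)
    default_hugepages=2048   # Hugepagesize reported by /proc/meminfo, in kB
    nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"     # -> 1024, all assigned to the single user node, node 0
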
00:04:51.636 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:51.636 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:51.636 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:51.636 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:51.636 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:51.636 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:51.636 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:51.636 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:51.636 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:51.896 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:51.896 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:51.896 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:51.896 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:51.896 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:51.896 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:51.896 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:52.903 0000:86:00.0 (8086 0a54): nvme -> vfio-pci
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:52.903 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71553624 kB' 'MemAvailable: 75145608 kB' 'Buffers: 2704 kB' 'Cached: 14652832 kB' 'SwapCached: 0 kB' 'Active: 11651704 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200952 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615132 kB' 'Mapped: 193056 kB' 'Shmem: 10588956 kB' 'KReclaimable: 357600 kB' 'Slab: 1021796 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664196 kB' 'KernelStack: 22672 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12681012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221720 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for every field from MemTotal through HardwareCorrupted, none matching]
00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
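
verify_nr_hugepages collects its counters the same way: AnonHugePages first (0 kB here, so transparent hugepages are not inflating the numbers), then HugePages_Surp and HugePages_Rsvd below. A hedged sketch of that bookkeeping, reusing get_meminfo_sketch from the earlier sketch (the script's exact assertions live in test/setup/hugepages.sh; this pass check is illustrative):

    anon=$(get_meminfo_sketch AnonHugePages)     # 0    -> no THP interference
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0    -> no surplus pages allocated
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0    -> nothing reserved but unfaulted
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 -> matches nr_hugepages above
    (( anon == 0 && surp == 0 && resv == 0 && total == 1024 )) &&
        echo 'default_setup pool looks sane'
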
00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.904 21:21:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71557892 kB' 'MemAvailable: 75149876 kB' 'Buffers: 2704 kB' 'Cached: 14652836 kB' 'SwapCached: 0 kB' 'Active: 11651192 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200440 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614872 kB' 'Mapped: 193036 kB' 'Shmem: 10588960 kB' 'KReclaimable: 357600 kB' 'Slab: 1021760 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664160 kB' 'KernelStack: 22560 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12681032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221672 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.904 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.904 21:21:53 
[xtrace elided: the setup/common.sh@31-32 read/compare loop steps past each remaining /proc/meminfo field (SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) without matching HugePages_Surp]
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
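The trace above is setup/common.sh's get_meminfo helper running under xtrace: it reads a meminfo file into an array, splits each "Key: value" line on ': ', and echoes the value once the requested key matches. A minimal sketch of that lookup, reconstructed from the traced statements (an illustration of the pattern, not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Reconstruction of the traced lookup: get_meminfo KEY [NODE]
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # When a node is given, prefer that node's sysfs meminfo.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp     # prints 0 on this machine
    get_meminfo HugePages_Total 0  # node0 value, read from sysfs

The per-field [[ ... ]] / continue spam in the log is simply this while loop running with xtrace on: one comparison is traced for every meminfo line until the key of interest is reached.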
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:52.905 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71558260 kB' 'MemAvailable: 75150244 kB' 'Buffers: 2704 kB' 'Cached: 14652848 kB' 'SwapCached: 0 kB' 'Active: 11651272 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200520 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614856 kB' 'Mapped: 193036 kB' 'Shmem: 10588972 kB' 'KReclaimable: 357600 kB' 'Slab: 1021824 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664224 kB' 'KernelStack: 22608 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12680900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221640 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the same setup/common.sh@31-32 loop walks the snapshot above field by field until it reaches HugePages_Rsvd]
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:52.906 nr_hugepages=1024
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:52.906 resv_hugepages=0
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:52.906 surplus_hugepages=0
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:52.906 anon_hugepages=0
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
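The hugepages.sh@99-109 block above is an accounting check: the pool the test configured (nr_hugepages=1024) has to line up with what the kernel reports once reserved (HugePages_Rsvd) and surplus (HugePages_Surp) pages are counted. A standalone sketch of the same arithmetic against a stock /proc/meminfo (variable names here are illustrative, not SPDK's):

    #!/usr/bin/env bash
    requested=1024   # what the test asked the kernel for

    # The three counters the script obtained via get_meminfo.
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)

    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"

    # Mirror of the @107 check in the trace: the reported pool must equal
    # the request plus any surplus/reserved pages, or setup went wrong.
    if (( total == requested + surp + resv )); then
        echo "hugepage pool matches the requested size"
    else
        echo "unexpected pool: $total != $requested + $surp + $resv" >&2
        exit 1
    fi

In the run logged here all three lookups returned the expected values (total 1024, surplus 0, reserved 0), so both arithmetic tests pass and the script moves on to verifying the per-node split.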
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:52.906 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71557004 kB' 'MemAvailable: 75148988 kB' 'Buffers: 2704 kB' 'Cached: 14652876 kB' 'SwapCached: 0 kB' 'Active: 11651676 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200924 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615192 kB' 'Mapped: 193036 kB' 'Shmem: 10589000 kB' 'KReclaimable: 357600 kB' 'Slab: 1021824 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664224 kB' 'KernelStack: 22512 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12681072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221656 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the setup/common.sh@31-32 loop again walks every field of the snapshot until HugePages_Total matches]
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
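get_nodes above discovers two NUMA nodes under /sys/devices/system/node and seeds nodes_sys with each node's hugepage count (1024 on node0, 0 on node1). The per-node pass that follows calls get_meminfo with an explicit node argument, which swaps the source file from /proc/meminfo to that node's sysfs meminfo. A quick way to compare the two views (the paths are the standard kernel sysfs layout; the values are machine-specific):

    # System-wide counters:
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo

    # node0's counters; every line carries a "Node 0" prefix, which is
    # exactly what the mem=("${mem[@]#Node +([0-9]) }") strip removes.
    # Note: per-node meminfo reports Total/Free/Surp but has no Rsvd line.
    grep -E 'HugePages_(Total|Free|Surp):' /sys/devices/system/node/node0/meminfo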
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:52.907 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 35566468 kB' 'MemUsed: 12501928 kB' 'SwapCached: 0 kB' 'Active: 8599736 kB' 'Inactive: 268784 kB' 'Active(anon): 8428948 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 268784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8620296 kB' 'Mapped: 76436 kB' 'AnonPages: 251476 kB' 'Shmem: 8180724 kB' 'KernelStack: 11032 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116656 kB' 'Slab: 403228 kB' 'SReclaimable: 116656 kB' 'SUnreclaim: 286572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the setup/common.sh@31-32 loop walks node0's snapshot (MemTotal, MemFree, MemUsed, ...) comparing each field against HugePages_Surp; by 00:04:52.908 the scan has reached the HugePages_Total field and continues]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.908 node0=1024 expecting 1024 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.908 00:04:52.908 real 0m4.268s 00:04:52.908 user 0m1.298s 00:04:52.908 sys 0m2.161s 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.908 21:21:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:52.908 ************************************ 00:04:52.908 END TEST default_setup 00:04:52.908 ************************************ 00:04:53.167 21:21:53 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:53.167 21:21:53 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.167 21:21:53 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.167 21:21:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.167 ************************************ 00:04:53.167 START TEST per_node_1G_alloc 00:04:53.167 ************************************ 00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:53.167 21:21:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:56.467 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:56.467 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:56.467 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:56.467 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:56.467 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:56.467 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:56.467 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:56.467 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:56.467 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:56.467 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:56.468 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:56.468 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:56.468 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:56.468 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:56.468 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:56.468 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:56.468 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
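[Editorial aside] The trace above shows how get_test_nr_hugepages turns the request into a per-node plan: 1048576 kB at the default 2048 kB hugepage size is 512 pages, and get_test_nr_hugepages_per_node sets nodes_test[0]=512 and nodes_test[1]=512 before scripts/setup.sh runs with NRHUGE=512 HUGENODE=0,1. A minimal sketch of what such a per-node reservation comes down to at the kernel's sysfs interface (illustrative only, assuming 2 MB default hugepages and root privileges; this is not SPDK's setup.sh itself):

  #!/usr/bin/env bash
  # Reserve NRHUGE 2 MB hugepages on each NUMA node listed in HUGENODE (comma-separated).
  NRHUGE=${NRHUGE:-512}
  HUGENODE=${HUGENODE:-0,1}
  IFS=',' read -ra nodes <<< "$HUGENODE"
  for node in "${nodes[@]}"; do
      f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      echo "$NRHUGE" > "$f"                        # the kernel may grant fewer pages
      echo "node$node: requested $NRHUGE, got $(cat "$f")"
  done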
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.468 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71544580 kB' 'MemAvailable: 75136564 kB' 'Buffers: 2704 kB' 'Cached: 14652968 kB' 'SwapCached: 0 kB' 'Active: 11651420 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200668 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614288 kB' 'Mapped: 193080 kB' 'Shmem: 10589092 kB' 'KReclaimable: 357600 kB' 'Slab: 1021752 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664152 kB' 'KernelStack: 22416 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12678640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221720 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace collapsed: the read loop continues past every /proc/meminfo field from MemTotal through HardwareCorrupted until AnonHugePages matches]
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
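[Editorial aside] Every get_meminfo call above emits one trace line per meminfo field until the requested key matches, which is what produces the long runs of "continue". Condensed, the logic visible in the setup/common.sh trace is roughly the following (a sketch reconstructed from the trace, not the verbatim source):

  shopt -s extglob
  get_meminfo() {                       # usage: get_meminfo <field> [<node>]
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo mem
      # A node argument switches to that node's meminfo, whose lines carry a "Node N " prefix.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix (extglob pattern)
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && echo "$val" && return 0
      done
      return 1
  }
  # e.g. get_meminfo HugePages_Free -> 1024 on this machine, per the dump above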
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.469 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.470 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71544272 kB' 'MemAvailable: 75136256 kB' 'Buffers: 2704 kB' 'Cached: 14652984 kB' 'SwapCached: 0 kB' 'Active: 11651904 kB' 'Inactive: 3615828 kB' 'Active(anon): 11201152 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615304 kB' 'Mapped: 193044 kB' 'Shmem: 10589108 kB' 'KReclaimable: 357600 kB' 'Slab: 1021832 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664232 kB' 'KernelStack: 22496 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12679032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221688 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
00:04:56.471 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace collapsed: the same per-field scan runs again, continuing past every /proc/meminfo field from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
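[Editorial aside] At this point verify_nr_hugepages has sampled AnonHugePages (anon=0) and HugePages_Surp (surp=0) and is about to read HugePages_Rsvd. The hugepage counters it samples, with the values from the dumps above and a quick way to inspect them on any Linux box:

  grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
  # HugePages_Total: 1024   pages set aside via nr_hugepages
  # HugePages_Free:  1024   set-aside pages not currently backing a mapping
  # HugePages_Rsvd:  0      pages promised to mappings but not yet faulted in
  # HugePages_Surp:  0      overcommit pages allocated beyond nr_hugepages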
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.472 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71544900 kB' 'MemAvailable: 75136884 kB' 'Buffers: 2704 kB' 'Cached: 14653000 kB' 'SwapCached: 0 kB' 'Active: 11651916 kB' 'Inactive: 3615828 kB' 'Active(anon): 11201164 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615324 kB' 'Mapped: 193044 kB' 'Shmem: 10589124 kB' 'KReclaimable: 357600 kB' 'Slab: 1021832 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664232 kB' 'KernelStack: 22496 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12679052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221688 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: common.sh@31-32 compares every key of the snapshot, MemTotal through HugePages_Free, against HugePages_Rsvd and skips each non-match with `continue`]
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
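At this point surp and resv are both known to be 0; the records that follow print the bookkeeping and cross-check the kernel's HugePages_Total against the requested count. A hedged sketch of that arithmetic, mirroring hugepages.sh@99-@110 as seen in the trace (the surrounding wiring is reconstructed, and get_meminfo is the sketch above, not SPDK's exact code):

    nr_hugepages=1024                      # requested 2 MiB pages (Hugepagesize: 2048 kB above)
    surp=$(get_meminfo HugePages_Surp)     # -> 0  (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)     # -> 0  (hugepages.sh@100)
    echo "nr_hugepages=$nr_hugepages"      # hugepages.sh@102
    echo "resv_hugepages=$resv"            # hugepages.sh@103
    echo "surplus_hugepages=$surp"         # hugepages.sh@104
    # Every requested page must be visible to the kernel: 1024 == 1024 + 0 + 0.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

The anon_hugepages=0 line printed below comes from the same bookkeeping block (hugepages.sh@105); its source value is not visible in this excerpt.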
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:56.474 nr_hugepages=1024
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.474 resv_hugepages=0
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.474 surplus_hugepages=0
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.474 anon_hugepages=0
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: the same common.sh@17-31 set-up as above (no node argument, so mem_f=/proc/meminfo), followed by the snapshot:]
00:04:56.474 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71545552 kB' 'MemAvailable: 75137536 kB' 'Buffers: 2704 kB' 'Cached: 14653024 kB' 'SwapCached: 0 kB' 'Active: 11651948 kB' 'Inactive: 3615828 kB' 'Active(anon): 11201196 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615324 kB' 'Mapped: 193044 kB' 'Shmem: 10589148 kB' 'KReclaimable: 357600 kB' 'Slab: 1021832 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664232 kB' 'KernelStack: 22496 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12679076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221688 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: common.sh@31-32 compares every key, MemTotal through Unaccepted, against HugePages_Total and skips each non-match with `continue`]
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.476 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 36610584 kB' 'MemUsed: 11457812 kB' 'SwapCached: 0 kB' 'Active: 8600376 kB' 'Inactive: 268784 kB' 'Active(anon): 8429588 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 268784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8620364 kB' 'Mapped: 76444 kB' 'AnonPages: 251976 kB' 'Shmem: 8180792 kB' 'KernelStack: 11048 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116656 kB' 'Slab: 403212 kB' 'SReclaimable: 116656 kB' 'SUnreclaim: 286556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
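The get_nodes records just above (hugepages.sh@27-33) discover the NUMA topology by globbing sysfs, and the loop entered at hugepages.sh@115 re-runs get_meminfo against each node's own meminfo file. A sketch under the same assumptions as the earlier one; the trace only shows the already-expanded value 512 at @30, so reading the node's HugePages_Total on the right-hand side is a guess that happens to match this 2-node, 1024-page run:

    shopt -s extglob nullglob
    declare -a nodes_sys

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Index is the trailing digits of the directory name (node0 -> 0);
            # 512 pages landed on each of the two nodes in this run.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this box  (hugepages.sh@32)
        (( no_nodes > 0 ))          # hugepages.sh@33: at least one node must exist
    }

Note how the node-0 call above switches mem_f to /sys/devices/system/node/node0/meminfo at common.sh@24, which is why the @29 strip of the "Node N " prefix matters for the snapshot that follows.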
[xtrace elided: common.sh@31-32 scans node0's snapshot key by key, MemTotal through HugePages_Free, against HugePages_Surp, skipping each non-match with `continue`; the log excerpt ends mid-scan, immediately after the HugePages_Total comparison]
00:04:56.477 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.477 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.477 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.477 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:56.477 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.477 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218156 kB' 'MemFree: 34935876 kB' 'MemUsed: 9282280 kB' 'SwapCached: 0 kB' 'Active: 3051612 kB' 'Inactive: 3347044 kB' 'Active(anon): 2771648 kB' 'Inactive(anon): 0 kB' 'Active(file): 279964 kB' 'Inactive(file): 3347044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6035408 kB' 'Mapped: 116600 kB' 'AnonPages: 363356 kB' 'Shmem: 2408400 kB' 'KernelStack: 11448 kB' 'PageTables: 4880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240944 kB' 'Slab: 618620 kB' 'SReclaimable: 240944 kB' 'SUnreclaim: 377676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.478 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
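A minimal bash sketch of the get_meminfo helper, reconstructed from the setup/common.sh@17-33 xtrace lines above; this is an approximation of what the traced script does, not the verbatim SPDK source, and it assumes extglob is enabled for the Node-prefix strip:

# Sketch of get_meminfo as reconstructed from the xtrace above
# (approximation, not the verbatim setup/common.sh).
shopt -s extglob # required by the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# Prefer the per-node sysfs file when a node number is given.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <N> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Key: value ..." pairs until the requested key is found.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp 1 # prints 0 for the node1 state traced above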
[xtrace condensed: setup/common.sh@31-32 read/compare loop over the node1 meminfo fields (MemTotal through HugePages_Free), continuing until HugePages_Surp is reached]
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:56.479 node0=512 expecting 512
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:56.479 node1=512 expecting 512
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:56.479 real 0m3.273s
00:04:56.479 user 0m1.220s
00:04:56.479 sys 0m2.051s
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:56.479 21:21:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:56.479 ************************************
00:04:56.479 END TEST per_node_1G_alloc
00:04:56.479 ************************************
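The per-node counts just verified (node0=512, node1=512) can also be read directly from sysfs, independent of the helpers traced above; a small sketch against the standard kernel interface (the hugepages-2048kB directory assumes the 2 MiB page size used in this run):

# Print each node's 2 MiB hugepage pool size straight from sysfs
# (standard kernel interface, not part of the SPDK test scripts).
for n in /sys/devices/system/node/node[0-9]*; do
	echo "${n##*/}: $(<"$n/hugepages/hugepages-2048kB/nr_hugepages")"
done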
00:04:56.479 21:21:56 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:56.479 21:21:56 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:56.479 21:21:56 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:56.479 21:21:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:56.479 ************************************
00:04:56.479 START TEST even_2G_alloc
00:04:56.479 ************************************
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.479 21:21:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
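A plausible reconstruction of the even split traced at setup/hugepages.sh@81-84 above: dividing the pages left by the nodes left reproduces the ': 512', ': 1' and ': 0', ': 0' values in the trace (inferred from the xtrace, not the verbatim script):

# Spread nr_hugepages evenly across NUMA nodes, filling from the last
# node down (reconstruction inferred from the xtrace, not verbatim).
nr_hugepages=1024
_nr_hugepages=$nr_hugepages
_no_nodes=2 # two NUMA nodes on this test rig
declare -a nodes_test

while ((_no_nodes > 0)); do
	nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
	: $((_nr_hugepages -= nodes_test[_no_nodes - 1])) # traces as ': 512' then ': 0'
	: $((--_no_nodes))                                # traces as ': 1' then ': 0'
done
echo "${nodes_test[@]}" # 512 512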
00:04:59.776 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:59.776 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:59.776 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.776 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71547208 kB' 'MemAvailable: 75139192 kB' 'Buffers: 2704 kB' 'Cached: 14653132 kB' 'SwapCached: 0 kB' 'Active: 11650576 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199824 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613304 kB' 'Mapped: 192032 kB' 'Shmem: 10589256 kB' 'KReclaimable: 357600 kB' 'Slab: 1021812 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664212 kB' 'KernelStack: 22448 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12668112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221624 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[xtrace condensed: setup/common.sh@31-32 read/compare loop over the global meminfo fields (MemTotal through HardwareCorrupted), continuing until AnonHugePages is reached]
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
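For reference, the field-by-field scan condensed above is equivalent to a single awk lookup; this is an alternative way to express the same query, not what setup/common.sh actually runs:

# One-shot equivalent of 'get_meminfo AnonHugePages' (alternative
# formulation; the SPDK helper uses the bash read loop traced above).
awk '$1 == "AnonHugePages:" { print $2; exit }' /proc/meminfo # -> 0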
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.778 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71547328 kB' 'MemAvailable: 75139312 kB' 'Buffers: 2704 kB' 'Cached: 14653136 kB' 'SwapCached: 0 kB' 'Active: 11649796 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199044 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613020 kB' 'Mapped: 191940 kB' 'Shmem: 10589260 kB' 'KReclaimable: 357600 kB' 'Slab: 1021824 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664224 kB' 'KernelStack: 22432 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12668136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221624 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[xtrace condensed: setup/common.sh@31-32 read/compare loop over the global meminfo fields (MemTotal through KernelStack), continuing until HugePages_Surp is reached]
00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 
21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.779 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 
21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- 
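The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp here, yielding surp=0). A minimal sketch of that helper, reconstructed from the @17-@33 line references visible in the trace; the verbatim SPDK source may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob  # the +([0-9]) patterns below need extglob

    get_meminfo() {
        local get=$1 node=$2   # field name, optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # per-node queries read that node's own meminfo file instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node meminfo lines carry a "Node N " prefix; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        # scan field by field, exactly as the trace shows, until $get matches
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called as get_meminfo HugePages_Surp for the system-wide value, or get_meminfo HugePages_Surp 0 for node 0, as the trace does further down.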
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.780 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71546572 kB' 'MemAvailable: 75138556 kB' 'Buffers: 2704 kB' 'Cached: 14653152 kB' 'SwapCached: 0 kB' 'Active: 11650032 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199280 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613288 kB' 'Mapped: 191940 kB' 'Shmem: 10589276 kB' 'KReclaimable: 357600 kB' 'Slab: 1021824 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664224 kB' 'KernelStack: 22464 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12668164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221592 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[... the identical @31/@32 compare-and-continue trace repeats for every field, MemTotal through HugePages_Free ...]
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:59.782 nr_hugepages=1024
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:59.782 resv_hugepages=0
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:59.782 surplus_hugepages=0
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:59.782 anon_hugepages=0
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
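At setup/hugepages.sh@107-@109 the test cross-checks the kernel's hugepage accounting: the 1024 pages it requested must equal HugePages_Total with no surplus or reserved pages outstanding. Roughly, with variable names following the trace (the surrounding script logic is abbreviated, so this is a sketch, not the verbatim source):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    # both assertions seen in the trace: totals reconcile, nothing was trimmed
    (( total == nr_hugepages + surp + resv ))
    (( total == nr_hugepages ))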
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.782 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.783 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71546764 kB' 'MemAvailable: 75138748 kB' 'Buffers: 2704 kB' 'Cached: 14653180 kB' 'SwapCached: 0 kB' 'Active: 11649812 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199060 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613024 kB' 'Mapped: 191940 kB' 'Shmem: 10589304 kB' 'KReclaimable: 357600 kB' 'Slab: 1021840 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664240 kB' 'KernelStack: 22432 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12668320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221576 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[... the identical @31/@32 compare-and-continue trace repeats for every field, MemTotal through HugePages_Rsvd ...]
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 36618084 kB' 'MemUsed: 11450312 kB' 'SwapCached: 0 kB' 'Active: 8597460 kB' 'Inactive: 268784 kB' 'Active(anon): 8426672 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 268784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8620464 kB' 'Mapped: 76064 kB' 'AnonPages: 248944 kB' 'Shmem: 8180892 kB' 'KernelStack: 10984 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB'
'WritebackTmp: 0 kB' 'KReclaimable: 116656 kB' 'Slab: 403312 kB' 'SReclaimable: 116656 kB' 'SUnreclaim: 286656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.784 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
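The `mapfile -t mem` plus `mem=("${mem[@]#Node +([0-9]) }")` records in this node-0 pass show how the helper normalizes per-node meminfo: lines under /sys/devices/system/node/nodeN/meminfo carry a "Node N " prefix that /proc/meminfo lines lack, and the extglob pattern strips it in one array expansion. A sketch (assumes a NUMA Linux host with node0 present; `+([0-9])` requires extglob, which the traced script evidently enables):

shopt -s extglob

node=0
mem_f=/sys/devices/system/node/node$node/meminfo

mapfile -t mem < "$mem_f"            # e.g. "Node 0 MemFree: 36618084 kB"
mem=("${mem[@]#Node +([0-9]) }")     # ->   "MemFree: 36618084 kB"

printf '%s\n' "${mem[@]}" | head -n 3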
00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.785 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218156 kB' 'MemFree: 34928680 kB' 'MemUsed: 9289476 kB' 'SwapCached: 0 kB' 'Active: 3052944 kB' 'Inactive: 3347044 kB' 'Active(anon): 2772980 kB' 'Inactive(anon): 0 kB' 'Active(file): 279964 kB' 'Inactive(file): 3347044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6035452 kB' 'Mapped: 115876 kB' 'AnonPages: 364672 kB' 'Shmem: 2408444 kB' 'KernelStack: 11480 kB' 'PageTables: 4920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 240944 kB' 'Slab: 618528 kB' 'SReclaimable: 240944 kB' 'SUnreclaim: 377584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.786 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.786 21:21:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
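The node-1 scan above concludes just below with `echo 0` / `return 0`, after which hugepages.sh folds each node's surplus into its expected count and emits the `node0=512 expecting 512` / `node1=512 expecting 512` checks. The `sorted_t[nodes_test[node]]=1` / `sorted_s[nodes_sys[node]]=1` records use the counts themselves as array indices, so the final comparison matches the two sets of per-node counts regardless of node order. A sketch of that pattern, with this run's values:

declare -a nodes_test=(512 512)   # counts the test computed per node
declare -a nodes_sys=(512 512)    # counts /sys reported per node
declare -a sorted_t sorted_s

for node in "${!nodes_test[@]}"; do
	surp=0                         # get_meminfo HugePages_Surp "$node" -> 0 above
	(( nodes_test[node] += surp ))
	sorted_t[nodes_test[node]]=1   # the count becomes the index, so the
	sorted_s[nodes_sys[node]]=1    # comparison below ignores node ordering
	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node counts match"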
00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:59.787 node0=512 expecting 512 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:59.787 node1=512 expecting 512 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:59.787 00:04:59.787 real 0m3.282s 00:04:59.787 user 0m1.252s 00:04:59.787 sys 0m2.066s 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:59.787 21:21:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:59.787 ************************************ 00:04:59.787 END TEST even_2G_alloc 00:04:59.787 ************************************ 00:04:59.787 21:21:59 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:59.787 21:21:59 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:59.787 21:21:59 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:59.787 21:21:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.787 ************************************ 00:04:59.787 START TEST odd_alloc 00:04:59.787 ************************************ 00:04:59.787 21:21:59 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:59.787 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.788 21:21:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.083 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:03.083 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
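The odd_alloc setup just traced requests 2098176 kB (HUGEMEM=2049 MiB), which rounds up to an odd 1025 pages of 2048 kB, so an even two-node split is impossible: the trace assigns nodes_test[1]=512 and nodes_test[0]=513. One way to produce that split is sketched below; the exact arithmetic inside get_test_nr_hugepages_per_node may differ, but the resulting per-node plan matches the trace:

size_kb=2098176
hugepage_kb=2048
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # -> 1025

no_nodes=2
declare -a nodes_test
left=$nr_hugepages
for (( node = no_nodes - 1; node >= 0; node-- )); do
	# even share among the remaining nodes; the last-filled node
	# (node0) absorbs the odd remainder
	share=$(( left / (node + 1) ))
	nodes_test[node]=$share
	(( left -= share ))
done

echo "per-node plan (node0 node1): ${nodes_test[*]}"   # -> 513 512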
00:05:03.083 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:03.083 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.083 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71548476 kB' 'MemAvailable: 75140460 kB' 'Buffers: 2704 kB' 'Cached: 14653296 kB' 'SwapCached: 0 kB' 'Active: 11649424 kB' 'Inactive: 3615828 kB' 'Active(anon): 11198672 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612476 kB' 'Mapped: 191988 kB' 'Shmem: 10589420 kB' 'KReclaimable: 357600 kB' 'Slab: 1021636 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664036 kB' 'KernelStack: 22416 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 12668800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221656 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.084 21:22:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
... (the same compare/continue/IFS/read cycle repeats for every remaining /proc/meminfo field) ...
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
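For readers skimming the trace: the cycle collapsed above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time, skipping every key that is not the one requested (AnonHugePages here) and echoing the matching value. A minimal self-contained sketch of that pattern, for illustration only (the real helper also supports per-NUMA-node lookups, sketched further below):

    #!/usr/bin/env bash
    # Walk /proc/meminfo line by line; print the value of the requested field.
    # The quoted "$get" forces a literal [[ == ]] match, which is what the
    # backslash-escaped pattern (\A\n\o\n\H\u\g\e\P\a\g\e\s) in the trace does.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # not our field: keep scanning
            echo "$val"                       # numeric value; a trailing "kB" lands in $_
            return 0
        done < /proc/meminfo
        return 1                              # field not present
    }
    get_meminfo_value AnonHugePages           # prints 0 on the host traced above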
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.084 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71549488 kB' 'MemAvailable: 75141472 kB' 'Buffers: 2704 kB' 'Cached: 14653300 kB' 'SwapCached: 0 kB' 'Active: 11649764 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199012 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612852 kB' 'Mapped: 191952 kB' 'Shmem: 10589424 kB' 'KReclaimable: 357600 kB' 'Slab: 1021660 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664060 kB' 'KernelStack: 22448 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 12668824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221608 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
... (compare/continue/IFS/read cycles for every field of the snapshot elided) ...
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
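The huge-page fields in the snapshot above are internally consistent: 1025 pages of the default 2048 kB size account exactly for the reported Hugetlb total, and HugePages_Free still equals HugePages_Total because nothing has mapped the pages yet. Spelled out as shell arithmetic:

    # 1025 pages x 2048 kB/page, matching 'Hugetlb: 2099200 kB' in the snapshot
    echo $(( 1025 * 2048 ))   # -> 2099200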
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.085 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.086 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71550380 kB' 'MemAvailable: 75142364 kB' 'Buffers: 2704 kB' 'Cached: 14653328 kB' 'SwapCached: 0 kB' 'Active: 11649984 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199232 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613024 kB' 'Mapped: 191952 kB' 'Shmem: 10589452 kB' 'KReclaimable: 357600 kB' 'Slab: 1021732 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664132 kB' 'KernelStack: 22464 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 12669344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221624 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
... (compare/continue/IFS/read cycles for every field of the snapshot elided) ...
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:03.087 nr_hugepages=1025
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:03.087 resv_hugepages=0
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:03.087 surplus_hugepages=0
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.087 anon_hugepages=0
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
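At this point the lookups have produced anon=0, surp=0 and resv=0, and nr_hugepages=1025 was computed earlier in the test, so the consistency check at setup/hugepages.sh@107 reduces to plain arithmetic. A restatement with the values taken from this run's log:

    # Values as reported in this run's log:
    nr_hugepages=1025   # the requested odd-sized allocation
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    # 1025 == 1025 + 0 + 0: the allocation is fully backed by ordinary
    # (non-surplus, non-reserved) huge pages.
    (( 1025 == nr_hugepages + surp + resv )) && echo "odd_alloc count verified"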
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71551248 kB' 'MemAvailable: 75143232 kB' 'Buffers: 2704 kB' 'Cached: 14653328 kB' 'SwapCached: 0 kB' 'Active: 11650148 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199396 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613244 kB' 'Mapped: 191952 kB' 'Shmem: 10589452 kB' 'KReclaimable: 357600 kB' 'Slab: 1021732 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664132 kB' 'KernelStack: 22464 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 12670612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221624 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
... (compare/continue/IFS/read cycles follow for each field; the captured excerpt ends mid-scan) ...
-- # continue 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.087 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 36626020 kB' 'MemUsed: 11442376 kB' 'SwapCached: 0 kB' 'Active: 8599428 kB' 'Inactive: 268784 kB' 'Active(anon): 8428640 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 268784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8620616 kB' 'Mapped: 76076 kB' 'AnonPages: 250776 kB' 'Shmem: 8181044 kB' 'KernelStack: 11016 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116656 kB' 'Slab: 403012 kB' 'SReclaimable: 116656 kB' 'SUnreclaim: 286356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
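[editor's note] The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" records above is bash xtrace of setup/common.sh's get_meminfo: mapfile slurps the whole meminfo dump (the printf record), then a loop walks it field by field under IFS=': ' until the requested key matches, echoes the value (1025 here), and returns 0. A minimal standalone sketch of the same lookup, assuming plain /proc/meminfo; the helper below is our simplification, not the SPDK source verbatim:

#!/usr/bin/env bash
# Sketch: return the value for one /proc/meminfo key, the way the
# scan above resolves HugePages_Total to 1025.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # the repeated [[ ... ]] / continue records
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}
get_meminfo HugePages_Total

The caller captures the echoed value through command substitution, which is why the trace shows "echo 1025" followed by "return 0" before hugepages.sh@110 can evaluate (( 1025 == nr_hugepages + surp + resv )).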
00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.088 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218156 kB' 'MemFree: 34925648 kB' 'MemUsed: 9292508 kB' 'SwapCached: 0 kB' 'Active: 3051344 kB' 'Inactive: 3347044 kB' 'Active(anon): 2771380 kB' 'Inactive(anon): 0 kB' 'Active(file): 279964 kB' 'Inactive(file): 3347044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6035456 kB' 'Mapped: 115876 kB' 'AnonPages: 362944 kB' 'Shmem: 2408448 kB' 'KernelStack: 11592 kB' 'PageTables: 5000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240944 kB' 'Slab: 618720 kB' 'SReclaimable: 240944 kB' 'SUnreclaim: 377776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
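[editor's note] The two per-node passes here (get_meminfo HugePages_Surp 0, then node=1 just above) point mem_f at /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo. Entries in those files carry a "Node N " prefix, which common.sh@29 strips with an extglob substitution, mem=("${mem[@]#Node +([0-9]) }"), before running the same key scan. A condensed sketch of that per-node variant plus a get_nodes-style enumeration; the function name and output line are ours:

#!/usr/bin/env bash
shopt -s extglob
# Sketch: per-node lookup. "Node 0 HugePages_Surp: 0" is reduced to
# "HugePages_Surp: 0" before the key comparison.
get_node_meminfo() {
    local get=$1 node=$2 line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }              # strip the "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"/sys/devices/system/node/node${node}/meminfo"
    return 1
}
# get_nodes-style enumeration, one entry per NUMA node, as in the
# nodes_sys[0]=512 / nodes_sys[1]=513 records earlier in the trace.
for path in /sys/devices/system/node/node+([0-9]); do
    node=${path##*node}
    echo "node${node}: HugePages_Surp=$(get_node_meminfo HugePages_Surp "$node")"
done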
00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.089 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:03.090 node0=512 expecting 513 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:03.090 node1=513 expecting 512 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:03.090 00:05:03.090 real 0m3.366s 00:05:03.090 user 0m1.331s 00:05:03.090 sys 0m2.077s 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:03.090 21:22:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.090 ************************************ 00:05:03.090 END TEST odd_alloc 00:05:03.090 ************************************ 00:05:03.090 21:22:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:03.090 21:22:03 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:03.090 21:22:03 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:03.090 21:22:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.090 ************************************ 00:05:03.090 START TEST custom_alloc 00:05:03.090 ************************************ 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.090 
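[editor's note] odd_alloc deliberately requests an odd total (1025 pages), which the kernel splits 512/513 across the two NUMA nodes; the mirrored "node0=512 expecting 513" / "node1=513 expecting 512" echoes and the [[ 512 513 == \5\1\2\ \5\1\3 ]] comparison above confirm that split before the test finishes in about 3.4 s. The custom_alloc sizing that follows is consistent with get_test_nr_hugepages taking a size in kB and dividing by the 2048 kB hugepage size from the meminfo dumps; this is an inference from the traced numbers, not a quoted formula:

# Assumed arithmetic behind nr_hugepages=512 and nr_hugepages=1024:
default_hugepages=2048                      # kB, Hugepagesize in the dumps above
echo $(( 1048576 / default_hugepages ))     # 512  (first request, 1 GiB)
echo $(( 2097152 / default_hugepages ))     # 1024 (second request, 2 GiB)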
21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.090 21:22:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.382 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:06.382 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:06.382 0000:80:04.2 (8086 2021): 
00:05:06.382 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:06.382 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:06.382 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.382 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
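The get_meminfo helper being traced here reads either /proc/meminfo or a per-NUMA-node meminfo file and plucks out a single field. A minimal re-creation, reconstructed from the xtrace rather than copied from setup/common.sh (the real helper reads fields inline; this version loops over a herestring):

    #!/usr/bin/env bash
    shopt -s extglob  # for the +([0-9]) pattern used below

    # Sketch of get_meminfo as implied by common.sh@17-33 in the trace.
    get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ mem_f line
      local -a mem
      mem_f=/proc/meminfo
      # @23: with a node argument, prefer the per-NUMA-node counters.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      # @29: per-node files prefix every line with "Node <n> "; strip it so
      # the field names line up with the plain /proc/meminfo layout.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
        # @31-33: split "Field:   value kB" and return the value on a match.
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
          echo "${val:-0}"
          return 0
        fi
      done
      return 1
    }

    get_meminfo AnonHugePages    # -> 0 in this run
    get_meminfo HugePages_Total  # -> 1536

With no node argument the /sys path test fails (as at @23 above) and the helper falls back to the global /proc/meminfo, which is exactly what the trace shows next.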
00:05:06.383 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 70507140 kB' 'MemAvailable: 74099124 kB' 'Buffers: 2704 kB' 'Cached: 14653476 kB' 'SwapCached: 0 kB' 'Active: 11651228 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200476 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614152 kB' 'Mapped: 191936 kB' 'Shmem: 10589600 kB' 'KReclaimable: 357600 kB' 'Slab: 1021696 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664096 kB' 'KernelStack: 22432 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 12669988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221704 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[... xtrace condensed: setup/common.sh@31-32 walk the snapshot above field by field, matching each name against AnonHugePages and continue-ing past every non-match (timestamps 00:05:06.383-00:05:06.649) ...]
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
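The numbers in that snapshot line up with the custom allocation: the two per-node pools sum to HugePages_Total, and Hugetlb is that total times the 2048 kB page size. A quick standalone check (not part of the SPDK scripts):

    total=$((512 + 1024))         # nodes_hp[0] + nodes_hp[1] -> 1536, matches HugePages_Total
    hugetlb_kb=$((total * 2048))  # 1536 pages x 2048 kB -> 3145728 kB, matches Hugetlb
    echo "HugePages_Total=$total Hugetlb=$hugetlb_kb kB"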
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.649 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 70508504 kB' 'MemAvailable: 74100488 kB' 'Buffers: 2704 kB' 'Cached: 14653480 kB' 'SwapCached: 0 kB' 'Active: 11650664 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199912 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613584 kB' 'Mapped: 191964 kB' 'Shmem: 10589604 kB' 'KReclaimable: 357600 kB' 'Slab: 1021760 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664160 kB' 'KernelStack: 22464 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 12670008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221656 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[... xtrace condensed: the same field-by-field walk, this time against HugePages_Surp (timestamps 00:05:06.649-00:05:06.651) ...]
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.651 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 70508868 kB' 'MemAvailable: 74100852 kB' 'Buffers: 2704 kB' 'Cached: 14653496 kB' 'SwapCached: 0 kB' 'Active: 11650984 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200232 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613920 kB' 'Mapped: 191964 kB' 'Shmem: 10589620 kB' 'KReclaimable: 357600 kB' 'Slab: 1021760 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664160 kB' 'KernelStack: 22464 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 12670028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221640 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
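At this point verify_nr_hugepages has anon and surp in hand and is fetching resv, the last of its three meminfo reads. How it combines them lives in setup/hugepages.sh@89 onward, which this excerpt does not show; the following is only a plausible reading of the check, using this run's values and the get_meminfo sketch from earlier:

    nr_hugepages=1536  # hugepages.sh@188: the requested pool, 512 + 1024
    anon=0             # AnonHugePages: THP usage that could skew the accounting
    surp=0             # HugePages_Surp: pages allocated beyond the static pool
    resv=0             # HugePages_Rsvd: pages reserved for mappings but not yet faulted
    total=$(get_meminfo HugePages_Total)  # 1536 in the snapshots above
    free=$(get_meminfo HugePages_Free)    # 1536: nothing has consumed the pool yet
    (( total == nr_hugepages )) || echo "pool size mismatch: $total != $nr_hugepages"
    echo "total=$total free=$free surp=$surp resv=$resv anon=$anon"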
[... xtrace condensed: a third field-by-field walk, against HugePages_Rsvd, from MemTotal through ShmemHugePages (timestamps 00:05:06.651-00:05:06.652) ...]
00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
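The scan condensed above is the whole of get_meminfo: pick a meminfo source, strip any per-node prefix, then walk key/value pairs until the requested field turns up. A minimal bash sketch of what the trace shows the helper doing; the function wrapper and the no-match fallback are assumptions, while the parameter expansions, the IFS=': ' read, and the pattern test are taken from the trace itself:

    # Sketch reconstructed from the trace; not the verbatim setup/common.sh helper.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem line
        # With a node argument, read that node's meminfo instead (common.sh@23-24).
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue    # the long run of skips traced above
            echo "$val"                         # here: 0 for HugePages_Rsvd
            return 0
        done
        return 1    # assumption: requested field absent
    }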
00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:06.652 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 70509652 kB' 'MemAvailable: 74101636 kB' 'Buffers: 2704 kB' 'Cached: 14653516 kB' 'SwapCached: 0 kB' 'Active: 11650740 kB' 'Inactive: 3615828 kB' 'Active(anon): 11199988 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613592 kB' 'Mapped: 191964 kB' 'Shmem: 10589640 kB' 'KReclaimable: 357600 kB' 'Slab: 1021760 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 664160 kB' 'KernelStack: 22464 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 12670048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221624 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
00:05:06.653 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: the same key-scan loop walks MemTotal through Unaccepted, none of which match]
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
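get_nodes, traced just above, globs the sysfs node directories and records a hugepage count per node (512 for node0, 1024 for node1 on this box). A sketch of that pattern; the trace only shows the already-expanded values, so deriving them via per-node HugePages_Total is a stand-in for whatever the real script reads:

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Stand-in right-hand side; the trace resolves it to 512, then 1024.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}    # 2 nodes found here
    (( no_nodes > 0 ))           # fail fast if the glob matched nothing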
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:06.655 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 36621508 kB' 'MemUsed: 11446888 kB' 'SwapCached: 0 kB' 'Active: 8599880 kB' 'Inactive: 268784 kB' 'Active(anon): 8429092 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 268784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8620736 kB' 'Mapped: 76088 kB' 'AnonPages: 251124 kB' 'Shmem: 8181164 kB' 'KernelStack: 11016 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116656 kB' 'Slab: 403088 kB' 'SReclaimable: 116656 kB' 'SUnreclaim: 286432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:06.656 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: key-scan loop walks node0's MemTotal through HugePages_Free, none of which match]
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
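For the per-node pass above, common.sh@23-24 switch mem_f to /sys/devices/system/node/node0/meminfo, whose lines all carry a "Node 0 " prefix that common.sh@29 strips. The two parameter expansions doing that work, shown in isolation (the sample line is illustrative):

    shopt -s extglob
    node=/sys/devices/system/node/node0
    echo "${node##*node}"             # -> 0 (node index taken from the sysfs path)
    line='Node 0 HugePages_Surp: 0'
    echo "${line#Node +([0-9]) }"     # -> HugePages_Surp: 0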
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218156 kB' 'MemFree: 33887828 kB' 'MemUsed: 10330328 kB' 'SwapCached: 0 kB' 'Active: 3050752 kB' 'Inactive: 3347044 kB' 'Active(anon): 2770788 kB' 'Inactive(anon): 0 kB' 'Active(file): 279964 kB' 'Inactive(file): 3347044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6035528 kB' 'Mapped: 115876 kB' 'AnonPages: 362372 kB' 'Shmem: 2408520 kB' 'KernelStack: 11448 kB' 'PageTables: 4780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 240944 kB' 'Slab: 618672 kB' 'SReclaimable: 240944 kB' 'SUnreclaim: 377728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:06.657 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: key-scan loop walks node1's MemTotal through HugePages_Free, none of which match]
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512
expecting 512
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:06.658 node1=1024 expecting 1024
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:06.658
00:05:06.658 real 0m3.496s
00:05:06.658 user 0m1.363s
00:05:06.658 sys 0m2.186s
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:06.658 21:22:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:06.658 ************************************
00:05:06.658 END TEST custom_alloc
00:05:06.658 ************************************
00:05:06.658 21:22:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:06.659 21:22:06 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:06.659 21:22:06 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:06.659 21:22:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:06.659 ************************************
00:05:06.659 START TEST no_shrink_alloc ************************************
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:06.659 21:22:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:09.955 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:09.955 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:09.955 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- 
# mapfile -t mem 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71514564 kB' 'MemAvailable: 75106548 kB' 'Buffers: 2704 kB' 'Cached: 14653632 kB' 'SwapCached: 0 kB' 'Active: 11652432 kB' 'Inactive: 3615828 kB' 'Active(anon): 11201680 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615584 kB' 'Mapped: 191924 kB' 'Shmem: 10589756 kB' 'KReclaimable: 357600 kB' 'Slab: 1020884 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 663284 kB' 'KernelStack: 22464 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12670824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221704 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB' 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.955 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.956 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71515616 kB' 'MemAvailable: 75107600 kB' 'Buffers: 2704 kB' 'Cached: 14653636 kB' 'SwapCached: 0 kB' 'Active: 11651700 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200948 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614888 kB' 'Mapped: 191976 kB' 'Shmem: 10589760 kB' 'KReclaimable: 357600 kB' 'Slab: 1020992 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 663392 kB' 'KernelStack: 22464 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12670840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221672 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
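The long run of IFS=': ' / read -r var val _ / continue entries above (and continuing below) is a single helper at work: get_meminfo from setup/common.sh reads /proc/meminfo, or a per-NUMA-node meminfo file, splits each line on ': ', and skips every key until it reaches the one requested, here HugePages_Surp. A minimal sketch of that helper, reconstructed from the setup/common.sh@17-@33 xtrace entries in this log; the function wrapper, the herestring, and the per-node guard are assumptions, since the trace records only the expanded commands:

    # Sketch of setup/common.sh get_meminfo, reconstructed from the xtrace (not verbatim)
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}       # trace @17/@18: key to look up, optional NUMA node
        local var val _
        local mem_f=/proc/meminfo mem  # trace @20/@22
        # With a node argument, prefer the per-node file (trace @23 tests this path)
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"          # trace @28
        mem=("${mem[@]#Node +([0-9]) }")   # trace @29: strip the "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # the repeated @31 entries
            [[ $var == "$get" ]] || continue        # the endless @32 'continue's
            echo "$val"                             # @33: the '# echo 0' seen here
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp on a box with no surplus hugepages, it prints 0 — the '# echo 0' / '# return 0' pair that closes each scan in this log and feeds surp=0 back into verify_nr_hugepages.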
00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.957 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.958 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
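For orientation while reading these snapshots: each one reports 'HugePages_Total: 1024', 'Hugepagesize: 2048 kB', and 'Hugetlb: 2097152 kB' because get_test_nr_hugepages 2097152 0 at the top of this test reserved a 2097152 kB (2 GiB) pool of 2048 kB pages, all on NUMA node 0. The trace records only the result ('# nr_hugepages=1024'), so the division in this sketch is inferred from those numbers rather than copied from hugepages.sh:

    # How 2097152 kB of hugepage memory becomes nr_hugepages=1024 (inferred arithmetic)
    size_kb=2097152                                                     # trace: '@49 local size=2097152'
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this system
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 2097152 / 2048 = 1024
    echo "nodes_test[0]=$nr_hugepages"                                  # trace: '@71 nodes_test[_no_nodes]=1024'

Pinning the whole pool to node 0 (user_nodes=('0') above) is why no_shrink_alloc expects all 1024 pages on node 0, whereas the earlier custom_alloc pass used a split layout and checked 'node0=512 expecting 512' and 'node1=1024 expecting 1024'.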
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.959 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71516200 kB' 'MemAvailable: 75108184 kB' 'Buffers: 2704 kB' 'Cached: 14653656 kB' 'SwapCached: 0 kB' 'Active: 11651584 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200832 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614712 kB' 'Mapped: 191976 kB' 'Shmem: 10589780 kB' 'KReclaimable: 357600 kB' 'Slab: 1020992 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 663392 kB' 'KernelStack: 22448 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12670864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221672 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
00:05:09.959 [xtrace condensed: setup/common.sh@31-32 reads the snapshot above field by field (IFS=': ' read -r var val _) and compares each key against HugePages_Rsvd; every non-matching field takes the "continue" branch, one iteration per /proc/meminfo line, until the key matches]
00:05:09.961 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.961 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.961 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
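For reference, the get_meminfo helper being traced here boils down to the following bash sketch, reconstructed from the traced commands (mapfile, the "Node N " prefix strip, the IFS=': ' scan loop). It is an illustrative approximation, not SPDK's setup/common.sh verbatim:

#!/usr/bin/env bash
# Sketch of a get_meminfo-style helper: prints the value of one
# /proc/meminfo (or per-node meminfo) field.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo mem
    # With a node argument, prefer that node's own meminfo file. With node
    # unset this tests .../node/node/meminfo, which never exists, exactly
    # as the trace above shows for the system-wide call.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan field by field; on a key match, print the value and stop.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Rsvd     # system-wide; prints 0 in this run
get_meminfo HugePages_Surp 0   # NUMA node 0 only; prints 0 in this run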
00:05:09.961 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:09.961 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:09.962 nr_hugepages=1024 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.962 resv_hugepages=0 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.962 surplus_hugepages=0 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.962 anon_hugepages=0 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71515696 kB' 'MemAvailable: 75107680 kB' 'Buffers: 2704 kB' 'Cached: 14653696 kB' 'SwapCached: 0 kB' 'Active: 11651408 kB' 'Inactive: 3615828 kB' 'Active(anon): 11200656 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614496 kB' 'Mapped: 191976 kB' 'Shmem: 10589820 kB' 'KReclaimable: 357600 kB' 'Slab: 1020992 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 663392 kB' 'KernelStack: 22448 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12670884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221672 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB' 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.962 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.962 
21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.962 [xtrace condensed: the same field-by-field scan as above, this time against HugePages_Total; every non-matching field takes the "continue" branch at setup/common.sh@32]
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
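The hugepages.sh@107-110 checks just traced verify a simple accounting identity: the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages. A hedged sketch of that check, built on the get_meminfo sketch above (values in comments are the ones from this run):

nr_hugepages=1024                       # requested by the test
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo HugePages_Total)    # 1024 in this run

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

# Fail early if the kernel's view disagrees with the request.
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1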
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.225 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 35581936 kB' 'MemUsed: 12486460 kB' 'SwapCached: 0 kB' 'Active: 8601224 kB' 'Inactive: 268784 kB' 'Active(anon): 8430436 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 268784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8620756 kB' 'Mapped: 76604 kB' 'AnonPages: 252548 kB' 'Shmem: 8181184 kB' 'KernelStack: 11000 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116656 kB' 'Slab: 402452 kB' 'SReclaimable: 116656 kB' 'SUnreclaim: 285796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
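At this point hugepages.sh gathers the per-NUMA-node picture: get_nodes records each node's hugepage count (node0=1024, node1=0 on this machine), and the @115-117 loop adds each node's surplus pages on top. The exact get_nodes internals are not shown in the trace; the following is one plausible bash equivalent, hedged accordingly and reusing the get_meminfo sketch above:

#!/usr/bin/env bash
shopt -s extglob nullglob
declare -a nodes_sys

# Walk every NUMA node directory, record its hugepage total, then add
# the node's surplus pages (0 for both nodes in this run).
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    nodes_sys[id]=$(get_meminfo HugePages_Total "$id")
    (( nodes_sys[id] += $(get_meminfo HugePages_Surp "$id") ))
done

no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1      # at least one node must exist
echo "found $no_nodes nodes"      # 2 here: node0=1024, node1=0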
00:05:10.225 [xtrace condensed: get_meminfo scans the node0 meminfo snapshot above field by field for HugePages_Surp; every non-matching field takes the "continue" branch at setup/common.sh@32 until the key matches]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.226 node0=1024 expecting 1024 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.226 21:22:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:13.522 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:13.522 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:13.522 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:13.522 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.522 21:22:13 
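Context for the INFO line above: with CLEAR_HUGE=no and NRHUGE=512, the setup step keeps the pre-existing, larger reservation instead of shrinking it, which is exactly what the no_shrink_alloc test wants to observe. A minimal sketch of that decision follows; the sysfs path is standard kernel ABI, but the control flow is an illustration, not SPDK's actual scripts/setup.sh:

    # Sketch only: keep an existing hugepage reservation when it already
    # covers the request (hypothetical logic, not scripts/setup.sh verbatim).
    nr_hugepages=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    want=${NRHUGE:-512}
    have=$(cat "$nr_hugepages")
    if ((have >= want)); then
        echo "INFO: Requested $want hugepages but $have already allocated on node0"
    else
        echo "$want" > "$nr_hugepages"   # requires root
    fi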
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
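The @96 test above is a transparent-hugepage check: the bracketed token in the sysfs file is the active mode, and the test only fails when that token is [never]. A one-liner equivalent, with the sysfs path being standard and the variable name purely illustrative:

    # "always [madvise] never" -> active mode is madvise; "[never]" would mean THP off
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    [[ $thp != *"[never]"* ]] && echo "THP enabled: $thp"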
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.522 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71532480 kB' 'MemAvailable: 75124464 kB' 'Buffers: 2704 kB' 'Cached: 14653764 kB' 'SwapCached: 0 kB' 'Active: 11655548 kB' 'Inactive: 3615828 kB' 'Active(anon): 11204796 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618096 kB' 'Mapped: 192528 kB' 'Shmem: 10589888 kB' 'KReclaimable: 357600 kB' 'Slab: 1021228 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 663628 kB' 'KernelStack: 22464 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12674404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221752 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[... xtrace of the setup/common.sh@31-@32 read/compare/continue cycle repeats for every /proc/meminfo field from MemTotal through HardwareCorrupted ...]
00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
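All the get_meminfo calls traced here follow the same shape: pick /proc/meminfo (or the per-node sysfs file when a node argument is given; in this run node is empty, so the global file is used), strip any "Node N" prefix, then scan key/value pairs with IFS=': ' until the requested field matches. A condensed sketch reconstructed from the xtrace, not copied from setup/common.sh verbatim:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo <field> [node] -- echo the field's value, defaulting to 0.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-NUMA-node counters live under sysfs; fall back to the global file.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on sysfs lines
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
    }

    # Usage matching this log; both lookups print 0 on this box.
    anon=$(get_meminfo AnonHugePages)
    surp=$(get_meminfo HugePages_Surp)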
# mem=("${mem[@]#Node +([0-9]) }") 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71529000 kB' 'MemAvailable: 75120984 kB' 'Buffers: 2704 kB' 'Cached: 14653764 kB' 'SwapCached: 0 kB' 'Active: 11657760 kB' 'Inactive: 3615828 kB' 'Active(anon): 11207008 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620836 kB' 'Mapped: 192556 kB' 'Shmem: 10589888 kB' 'KReclaimable: 357600 kB' 'Slab: 1021196 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 663596 kB' 'KernelStack: 22480 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12680332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221724 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB' 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.524 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.524 21:22:13 
[... xtrace of the setup/common.sh@31-@32 read/compare/continue cycle repeats for every /proc/meminfo field from MemTotal through HugePages_Rsvd ...]
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
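Both lookups so far return 0, which matches the counters embedded in the meminfo dumps above: the full 2048 kB pool is free, with nothing reserved or surplus. A quick way to spot-check the same four counters directly:

    # Expected on this host per the dumps above:
    #   HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo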
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.526 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71531220 kB' 'MemAvailable: 75123204 kB' 'Buffers: 2704 kB' 'Cached: 14653776 kB' 'SwapCached: 0 kB' 'Active: 11653732 kB' 'Inactive: 3615828 kB' 'Active(anon): 11202980 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615904 kB' 'Mapped: 192408 kB' 'Shmem: 10589900 kB' 'KReclaimable: 357600 kB' 'Slab: 1021220 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 663620 kB' 'KernelStack: 22496 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12691328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221688 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[... xtrace of the setup/common.sh@31-@32 read/compare/continue cycle repeats for every /proc/meminfo field from MemTotal through NFS_Unstable ...]
00:05:13.527 21:22:13
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.527 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.528 21:22:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:13.528 nr_hugepages=1024 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.528 resv_hugepages=0 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.528 surplus_hugepages=0 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.528 anon_hugepages=0 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.528 21:22:13 
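The wall of compare/continue lines above is setup/common.sh's get_meminfo helper scanning a meminfo dump one "field: value" pair at a time until the requested field matches, then echoing its value. A minimal stand-alone sketch of the same idea follows; the function name get_meminfo_field is hypothetical, and the real helper in test/setup/common.sh also takes an optional NUMA node, as sketched here:

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above, under the stated assumptions.
shopt -s extglob
get_meminfo_field() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # With a node argument, read the per-node view instead of /proc/meminfo.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that first.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Rsvd val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_field HugePages_Rsvd     # prints 0 on the box above
get_meminfo_field HugePages_Surp 0   # node-0 lookup, prints 0

Each non-matching field costs one [[ ]] test plus one continue, which is why the trace balloons: with xtrace enabled, every iteration of that loop is logged.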
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.528 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286552 kB' 'MemFree: 71532328 kB' 'MemAvailable: 75124312 kB' 'Buffers: 2704 kB' 'Cached: 14653820 kB' 'SwapCached: 0 kB' 'Active: 11653036 kB' 'Inactive: 3615828 kB' 'Active(anon): 11202284 kB' 'Inactive(anon): 0 kB' 'Active(file): 450752 kB' 'Inactive(file): 3615828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615716 kB' 'Mapped: 191996 kB' 'Shmem: 10589944 kB' 'KReclaimable: 357600 kB' 'Slab: 1021224 kB' 'SReclaimable: 357600 kB' 'SUnreclaim: 663624 kB' 'KernelStack: 22496 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 12674628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221672 kB' 'VmallocChunk: 0 kB' 'Percpu: 90496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3976148 kB' 'DirectMap2M: 39743488 kB' 'DirectMap1G: 57671680 kB'
[... the compare/continue cycle scans the dump once more, this time against HugePages_Total; every field from MemTotal through Unaccepted fails the match ...]
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
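One quick consistency check on the dump above: HugePages_Total is 1024 pages at Hugepagesize 2048 kB, and 1024 x 2048 kB = 2097152 kB, which is exactly the Hugetlb line; HugePages_Free still equals HugePages_Total because nothing has faulted pages out of the pool at this point in the run. The 1024 that get_meminfo just returned is therefore the expected value for the 2 GiB pool this test configured.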
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.530 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 35591776 kB' 'MemUsed: 12476620 kB' 'SwapCached: 0 kB' 'Active: 8601244 kB' 'Inactive: 268784 kB' 'Active(anon): 8430456 kB' 'Inactive(anon): 0 kB' 'Active(file): 170788 kB' 'Inactive(file): 268784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8620784 kB' 'Mapped: 76112 kB' 'AnonPages: 252944 kB' 'Shmem: 8181212 kB' 'KernelStack: 11160 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116656 kB' 'Slab: 402716 kB' 'SReclaimable: 116656 kB' 'SUnreclaim: 286060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the compare/continue cycle runs a final time against HugePages_Surp; every node0 field from MemTotal through HugePages_Free fails the match ...]
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:13.531 node0=1024 expecting 1024
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:13.531
00:05:13.531 real 0m6.764s
00:05:13.531 user 0m2.601s
00:05:13.531 sys 0m4.259s
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:13.531 21:22:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:13.531 ************************************
00:05:13.531 END TEST no_shrink_alloc
00:05:13.531 ************************************
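The "node0=1024 expecting 1024" line above is the test's final bookkeeping step: the per-node page count the test tracked, plus any reserved and surplus pages, must still equal what sysfs reported for that node. A hedged sketch of that check with this run's values hard-coded (variable names mirror the trace; the real logic lives in test/setup/hugepages.sh):

#!/usr/bin/env bash
# Per-node hugepage bookkeeping, condensed from the trace above.
nodes_test=([0]=1024)        # pages this test expects on node 0
nodes_sys=([0]=1024 [1]=0)   # pages read back from sysfs per node
resv=0 surp=0                # from get_meminfo HugePages_Rsvd / HugePages_Surp
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + surp ))  # reserved/surplus pages still count
    echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
done
# Output: node0=1024 expecting 1024 -> the [[ 1024 == 1024 ]] gate passes.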
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:13.531 21:22:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:13.531 00:05:13.531 real 0m25.004s 00:05:13.531 user 0m9.305s 00:05:13.531 sys 0m15.154s 00:05:13.531 21:22:13 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.531 21:22:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:13.531 ************************************ 00:05:13.531 END TEST hugepages 00:05:13.531 ************************************ 00:05:13.531 21:22:13 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:13.532 21:22:13 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.532 21:22:13 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.532 21:22:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.532 ************************************ 00:05:13.532 START TEST driver 00:05:13.532 ************************************ 00:05:13.532 21:22:13 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:13.791 * Looking for test storage... 
00:05:13.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:13.791 21:22:13 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:13.791 21:22:13 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.791 21:22:13 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:19.067 21:22:18 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:19.067 21:22:18 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:19.067 21:22:18 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:19.067 21:22:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.067 ************************************ 00:05:19.067 START TEST guess_driver 00:05:19.067 ************************************ 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 223 > 0 )) 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:19.067 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:19.067 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:19.067 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:19.067 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:19.067 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:19.067 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:19.067 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:19.067 21:22:18 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:19.067 Looking for driver=vfio-pci 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.067 21:22:18 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.604 21:22:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.541 21:22:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.541 21:22:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.541 21:22:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.541 21:22:22 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:22.541 21:22:22 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:22.541 21:22:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.541 21:22:22 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.816 00:05:27.816 real 0m8.750s 00:05:27.816 user 0m2.563s 00:05:27.816 sys 0m4.594s 00:05:27.816 21:22:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:27.816 21:22:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:27.816 ************************************ 00:05:27.816 END TEST guess_driver 00:05:27.816 ************************************ 00:05:27.816 00:05:27.816 real 0m13.397s 00:05:27.816 user 0m3.962s 00:05:27.816 sys 0m7.016s 00:05:27.816 21:22:27 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:27.816 
21:22:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:27.816 ************************************ 00:05:27.816 END TEST driver 00:05:27.816 ************************************ 00:05:27.816 21:22:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:27.816 21:22:27 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:27.816 21:22:27 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:27.816 21:22:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:27.816 ************************************ 00:05:27.816 START TEST devices 00:05:27.816 ************************************ 00:05:27.816 21:22:27 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:27.816 * Looking for test storage... 00:05:27.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:27.816 21:22:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:27.816 21:22:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:27.816 21:22:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.816 21:22:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:31.108 21:22:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:31.108 21:22:30 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:31.108 No valid GPT data, 
bailing 00:05:31.108 21:22:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:31.108 21:22:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:31.108 21:22:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:31.108 21:22:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:31.108 21:22:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:31.108 21:22:30 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:31.108 21:22:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:31.108 21:22:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:31.108 ************************************ 00:05:31.109 START TEST nvme_mount 00:05:31.109 ************************************ 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:31.109 21:22:30 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:31.109 21:22:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:31.678 Creating new GPT entries in memory. 00:05:31.678 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:31.678 other utilities. 00:05:31.678 21:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:31.678 21:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.678 21:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.678 21:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.678 21:22:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:33.088 Creating new GPT entries in memory. 00:05:33.088 The operation has completed successfully. 00:05:33.088 21:22:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:33.088 21:22:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.088 21:22:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1223813 00:05:33.088 21:22:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.088 21:22:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:33.088 21:22:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.088 21:22:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:33.088 21:22:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
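The xtrace here is dense, but the sequence it records is short. In outline (a minimal sketch with placeholder variables $disk, $mnt, and $SPDK_DIR standing in for the workspace path; the real helpers are partition_drive and mkfs in test/setup/common.sh, per the trace):

  disk=/dev/nvme0n1                                    # the test disk behind 0000:86:00.0
  mnt=$SPDK_DIR/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                             # drop any existing GPT/MBR label
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one 1 GiB partition, flock-serialized
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"
  mount "${disk}p1" "$mnt"
  : > "$mnt/test_nvme"                                 # dummy file (devices.sh@56 uses the ':' builtin)

The verify call being set up in the surrounding trace then runs scripts/setup.sh config with PCI_ALLOWED=0000:86:00.0 and scans each "pci ... status" line it prints, flipping found=1 once the line for 0000:86:00.0 reports the active nvme0n1:nvme0n1p1 mount.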
00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.088 21:22:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:36.377 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.377 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:36.377 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:36.377 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:36.377 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:36.377 21:22:36 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:05:36.377 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.378 21:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.677 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.678 21:22:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:42.969 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:42.969 00:05:42.969 real 0m12.023s 00:05:42.969 user 0m3.608s 00:05:42.969 sys 0m6.247s 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:42.969 21:22:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:42.969 ************************************ 00:05:42.969 END TEST nvme_mount 00:05:42.969 ************************************ 
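For reference, the cleanup that produced the wipefs lines in this test boils down to the following (a sketch of cleanup_nvme in devices.sh, using the same placeholders as above):

  mountpoint -q "$mnt" && umount "$mnt"            # unmount only if still mounted
  [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # ext4 magic: the '53 ef' at offset 0x438 above
  [[ -b $disk ]] && wipefs --all "$disk"           # GPT 'EFI PART' signatures plus the protective MBR's '55 aa'

Erasing the whole-disk signatures as well as the partition's is what lets the next test start from an unlabeled disk.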
00:05:42.969 21:22:42 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:42.969 21:22:42 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:42.969 21:22:42 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:42.969 21:22:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:42.969 ************************************ 00:05:42.969 START TEST dm_mount 00:05:42.969 ************************************ 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:42.969 21:22:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:43.907 Creating new GPT entries in memory. 00:05:43.907 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:43.907 other utilities. 00:05:43.907 21:22:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:43.907 21:22:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:43.907 21:22:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:43.907 21:22:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:43.907 21:22:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:44.845 Creating new GPT entries in memory. 00:05:44.845 The operation has completed successfully. 
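dm_mount repeats the partitioning loop because it needs two partitions to stitch a device-mapper target together. The two sgdisk calls in this test are (as traced; flock serializes writers of the partition table against concurrent users of the disk node):

  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199     # partition 1, completed just above
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351  # partition 2, issued next

sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 was started before the loop, and the wait on its PID afterwards holds the script until udev has announced both new partitions, so /dev/nvme0n1p1 and /dev/nvme0n1p2 exist before anything formats them.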
00:05:44.845 21:22:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:44.845 21:22:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:44.845 21:22:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:44.845 21:22:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:44.845 21:22:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:46.224 The operation has completed successfully. 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1228563 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.224 21:22:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:49.517 21:22:49 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.517 21:22:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.053 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:05:52.054 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:52.313 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:52.313 00:05:52.313 real 0m9.453s 00:05:52.313 user 0m2.236s 00:05:52.313 sys 0m4.182s 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:52.313 21:22:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:52.313 ************************************ 00:05:52.313 END TEST dm_mount 00:05:52.313 ************************************ 00:05:52.313 21:22:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:52.313 21:22:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:52.313 21:22:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:52.313 21:22:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:52.313 21:22:52 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:52.313 21:22:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:52.313 21:22:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:52.573 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:52.573 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:52.573 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:52.573 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:52.573 21:22:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:52.573 21:22:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:52.573 21:22:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:52.573 21:22:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:52.573 21:22:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:52.573 21:22:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:52.573 21:22:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:52.573
00:05:52.573 real 0m25.523s
00:05:52.573 user 0m7.308s
00:05:52.573 sys 0m12.894s
00:05:52.573 21:22:52 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:52.573 21:22:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:52.573 ************************************
00:05:52.573 END TEST devices
00:05:52.573 ************************************
00:05:52.573
00:05:52.573 real 1m26.304s
00:05:52.573 user 0m27.981s
00:05:52.573 sys 0m48.580s
00:05:52.573 21:22:52 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:52.573 21:22:52 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:52.573 ************************************
00:05:52.573 END TEST setup.sh
00:05:52.573 ************************************
00:05:52.574 21:22:52 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:55.864 Hugepages
00:05:55.864 node hugesize free / total
00:05:55.864 node0 1048576kB 0 / 0
00:05:55.864 node0 2048kB 2048 / 2048
00:05:55.864 node1 1048576kB 0 / 0
00:05:55.864 node1 2048kB 0 / 0
00:05:55.864
00:05:55.864 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:55.864 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:05:55.864 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:05:55.864 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:05:55.864 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:05:55.864 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:05:55.864 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:05:55.864 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:05:55.864 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:05:55.864 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:05:55.864 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:05:55.864 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:05:55.864 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:05:55.864 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:05:55.864 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:05:55.864 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:05:55.864 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:05:55.864 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:05:56.123 21:22:56 -- spdk/autotest.sh@130 -- # uname -s
00:05:56.123 21:22:56 --
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:56.123 21:22:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:56.123 21:22:56 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:59.413 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:59.413 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:59.980 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:06:00.239 21:23:00 -- common/autotest_common.sh@1531 -- # sleep 1 00:06:01.173 21:23:01 -- common/autotest_common.sh@1532 -- # bdfs=() 00:06:01.173 21:23:01 -- common/autotest_common.sh@1532 -- # local bdfs 00:06:01.173 21:23:01 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:06:01.173 21:23:01 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:06:01.173 21:23:01 -- common/autotest_common.sh@1512 -- # bdfs=() 00:06:01.173 21:23:01 -- common/autotest_common.sh@1512 -- # local bdfs 00:06:01.173 21:23:01 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.173 21:23:01 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:01.173 21:23:01 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:06:01.173 21:23:01 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:06:01.173 21:23:01 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:86:00.0 00:06:01.173 21:23:01 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:04.461 Waiting for block devices as requested 00:06:04.461 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:06:04.461 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:04.461 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:04.720 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:04.720 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:04.721 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:04.980 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:04.980 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:04.980 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:04.980 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:05.239 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:05.239 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:05.239 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:05.498 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:05.498 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:05.498 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:05.498 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:05.758 21:23:05 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 
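Note: the get_nvme_bdfs helper traced above boils down to a small pipeline: ask scripts/gen_nvme.sh for a generated bdev config and pull each controller's PCI address (traddr) out of it with jq. A standalone sketch reconstructed from the xtrace lines (helper and path names are taken from the trace itself; this block is not part of the captured run):

    #!/usr/bin/env bash
    # Enumerate NVMe controller BDFs the same way the trace above does.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && exit 1   # mirrors the (( 1 == 0 )) guard in the trace
    printf '%s\n' "${bdfs[@]}"         # on this box: 0000:86:00.0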
00:06:05.758 21:23:05 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:06:05.758 21:23:05 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:06:05.758 21:23:05 -- common/autotest_common.sh@1501 -- # grep 0000:86:00.0/nvme/nvme 00:06:05.758 21:23:05 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:06:05.758 21:23:05 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:06:05.758 21:23:05 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:06:05.758 21:23:05 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:06:05.758 21:23:05 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:06:05.758 21:23:05 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:06:05.758 21:23:05 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:06:05.758 21:23:05 -- common/autotest_common.sh@1544 -- # grep oacs 00:06:05.758 21:23:05 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:06:05.758 21:23:05 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:06:05.758 21:23:05 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:06:05.758 21:23:05 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:06:05.758 21:23:05 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:06:05.758 21:23:05 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:06:05.758 21:23:05 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:06:05.758 21:23:05 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:06:05.758 21:23:05 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:06:05.758 21:23:05 -- common/autotest_common.sh@1556 -- # continue 00:06:05.758 21:23:05 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:05.758 21:23:05 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:05.758 21:23:05 -- common/autotest_common.sh@10 -- # set +x 00:06:05.758 21:23:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:05.758 21:23:05 -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:05.758 21:23:05 -- common/autotest_common.sh@10 -- # set +x 00:06:05.758 21:23:05 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:09.051 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:09.051 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:09.989 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:06:09.989 21:23:10 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:09.989 21:23:10 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:09.989 21:23:10 -- 
common/autotest_common.sh@10 -- # set +x 00:06:09.989 21:23:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:09.989 21:23:10 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:06:09.989 21:23:10 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:06:09.989 21:23:10 -- common/autotest_common.sh@1576 -- # bdfs=() 00:06:09.989 21:23:10 -- common/autotest_common.sh@1576 -- # local bdfs 00:06:09.989 21:23:10 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:06:09.989 21:23:10 -- common/autotest_common.sh@1512 -- # bdfs=() 00:06:09.989 21:23:10 -- common/autotest_common.sh@1512 -- # local bdfs 00:06:09.989 21:23:10 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:09.989 21:23:10 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:09.989 21:23:10 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:06:10.248 21:23:10 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:06:10.248 21:23:10 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:86:00.0 00:06:10.248 21:23:10 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:06:10.248 21:23:10 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:06:10.248 21:23:10 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:06:10.248 21:23:10 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:10.248 21:23:10 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:06:10.248 21:23:10 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:86:00.0 00:06:10.248 21:23:10 -- common/autotest_common.sh@1591 -- # [[ -z 0000:86:00.0 ]] 00:06:10.248 21:23:10 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=1238735 00:06:10.248 21:23:10 -- common/autotest_common.sh@1597 -- # waitforlisten 1238735 00:06:10.248 21:23:10 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.248 21:23:10 -- common/autotest_common.sh@830 -- # '[' -z 1238735 ']' 00:06:10.248 21:23:10 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.248 21:23:10 -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:10.248 21:23:10 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.248 21:23:10 -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:10.248 21:23:10 -- common/autotest_common.sh@10 -- # set +x 00:06:10.248 [2024-06-07 21:23:10.370897] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
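Note: get_nvme_bdfs_by_id 0x0a54, traced above, narrows that list by PCI device ID before opal_revert_cleanup touches anything, reading the ID straight out of sysfs. A minimal sketch of the same filter (0x0a54 is the device ID the log matches on; the variable names here are illustrative, not from the script):

    #!/usr/bin/env bash
    # Keep only BDFs whose PCI device ID matches the requested one.
    want=0x0a54
    matched=()
    for bdf in "${bdfs[@]}"; do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $dev == "$want" ]] && matched+=("$bdf")
    done
    printf '%s\n' "${matched[@]}"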
00:06:10.248 [2024-06-07 21:23:10.370954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238735 ]
00:06:10.248 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.248 [2024-06-07 21:23:10.457667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.508 [2024-06-07 21:23:10.551916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.076 21:23:11 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:11.076 21:23:11 -- common/autotest_common.sh@863 -- # return 0
00:06:11.076 21:23:11 -- common/autotest_common.sh@1599 -- # bdf_id=0
00:06:11.076 21:23:11 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}"
00:06:11.076 21:23:11 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0
00:06:14.365 nvme0n1
00:06:14.365 21:23:14 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:06:14.365 [2024-06-07 21:23:14.618461] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:06:14.365 request:
00:06:14.365 {
00:06:14.365 "nvme_ctrlr_name": "nvme0",
00:06:14.365 "password": "test",
00:06:14.366 "method": "bdev_nvme_opal_revert",
00:06:14.366 "req_id": 1
00:06:14.366 }
00:06:14.366 Got JSON-RPC error response
00:06:14.366 response:
00:06:14.366 {
00:06:14.366 "code": -32602,
00:06:14.366 "message": "Invalid parameters"
00:06:14.366 }
00:06:14.625 21:23:14 -- common/autotest_common.sh@1603 -- # true
00:06:14.625 21:23:14 -- common/autotest_common.sh@1604 -- # (( ++bdf_id ))
00:06:14.625 21:23:14 -- common/autotest_common.sh@1607 -- # killprocess 1238735
00:06:14.625 21:23:14 -- common/autotest_common.sh@949 -- # '[' -z 1238735 ']'
00:06:14.625 21:23:14 -- common/autotest_common.sh@953 -- # kill -0 1238735
00:06:14.625 21:23:14 -- common/autotest_common.sh@954 -- # uname
00:06:14.625 21:23:14 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:14.625 21:23:14 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1238735
00:06:14.625 21:23:14 -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:14.625 21:23:14 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:14.625 21:23:14 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1238735'
00:06:14.625 killing process with pid 1238735
00:06:14.625 21:23:14 -- common/autotest_common.sh@968 -- # kill 1238735
00:06:14.625 21:23:14 -- common/autotest_common.sh@973 -- # wait 1238735
00:06:16.591 21:23:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:06:16.591 21:23:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:06:16.591 21:23:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:06:16.591 21:23:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:06:16.591 21:23:16 -- spdk/autotest.sh@162 -- # timing_enter lib
00:06:16.591 21:23:16 -- common/autotest_common.sh@723 -- # xtrace_disable
00:06:16.591 21:23:16 -- common/autotest_common.sh@10 -- # set +x
00:06:16.591 21:23:16 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:06:16.591 21:23:16 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:16.591 21:23:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:16.591 21:23:16 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:16.591 21:23:16 -- common/autotest_common.sh@10 -- # set +x
00:06:16.591 ************************************
00:06:16.591 START TEST env
00:06:16.591 ************************************
00:06:16.591 21:23:16 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:16.591 * Looking for test storage...
00:06:16.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:06:16.591 21:23:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:16.591 21:23:16 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:16.591 21:23:16 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:16.591 21:23:16 env -- common/autotest_common.sh@10 -- # set +x
00:06:16.591 ************************************
00:06:16.591 START TEST env_memory
00:06:16.591 ************************************
00:06:16.591 21:23:16 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:16.591
00:06:16.591
00:06:16.591 CUnit - A unit testing framework for C - Version 2.1-3
00:06:16.591 http://cunit.sourceforge.net/
00:06:16.591
00:06:16.591
00:06:16.591 Suite: memory
00:06:16.591 Test: alloc and free memory map ...[2024-06-07 21:23:16.555732] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:16.591 passed
00:06:16.591 Test: mem map translation ...[2024-06-07 21:23:16.584721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:16.591 [2024-06-07 21:23:16.584743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:16.591 [2024-06-07 21:23:16.584797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:16.591 [2024-06-07 21:23:16.584806] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:16.591 passed
00:06:16.591 Test: mem map registration ...[2024-06-07 21:23:16.644536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:06:16.591 [2024-06-07 21:23:16.644556] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:06:16.591 passed
00:06:16.591 Test: mem map adjacent registrations ...passed
00:06:16.591
00:06:16.591 Run Summary: Type Total Ran Passed Failed Inactive
00:06:16.591 suites 1 1 n/a 0 0
00:06:16.591 tests 4 4 4 0 0
00:06:16.591 asserts 152 152 152 0 n/a
00:06:16.591
00:06:16.591 Elapsed time = 0.203 seconds
00:06:16.591
00:06:16.591 real 0m0.211s
00:06:16.591 user 0m0.199s
00:06:16.591 sys 0m0.011s
00:06:16.591 21:23:16 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:16.591 21:23:16 env.env_memory -- common/autotest_common.sh@10 -- #
set +x 00:06:16.591 ************************************ 00:06:16.591 END TEST env_memory 00:06:16.591 ************************************ 00:06:16.591 21:23:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:16.591 21:23:16 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:16.591 21:23:16 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:16.591 21:23:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.591 ************************************ 00:06:16.592 START TEST env_vtophys 00:06:16.592 ************************************ 00:06:16.592 21:23:16 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:16.592 EAL: lib.eal log level changed from notice to debug 00:06:16.592 EAL: Detected lcore 0 as core 0 on socket 0 00:06:16.592 EAL: Detected lcore 1 as core 1 on socket 0 00:06:16.592 EAL: Detected lcore 2 as core 2 on socket 0 00:06:16.592 EAL: Detected lcore 3 as core 3 on socket 0 00:06:16.592 EAL: Detected lcore 4 as core 4 on socket 0 00:06:16.592 EAL: Detected lcore 5 as core 5 on socket 0 00:06:16.592 EAL: Detected lcore 6 as core 6 on socket 0 00:06:16.592 EAL: Detected lcore 7 as core 8 on socket 0 00:06:16.592 EAL: Detected lcore 8 as core 9 on socket 0 00:06:16.592 EAL: Detected lcore 9 as core 10 on socket 0 00:06:16.592 EAL: Detected lcore 10 as core 11 on socket 0 00:06:16.592 EAL: Detected lcore 11 as core 12 on socket 0 00:06:16.592 EAL: Detected lcore 12 as core 13 on socket 0 00:06:16.592 EAL: Detected lcore 13 as core 14 on socket 0 00:06:16.592 EAL: Detected lcore 14 as core 16 on socket 0 00:06:16.592 EAL: Detected lcore 15 as core 17 on socket 0 00:06:16.592 EAL: Detected lcore 16 as core 18 on socket 0 00:06:16.592 EAL: Detected lcore 17 as core 19 on socket 0 00:06:16.592 EAL: Detected lcore 18 as core 20 on socket 0 00:06:16.592 EAL: Detected lcore 19 as core 21 on socket 0 00:06:16.592 EAL: Detected lcore 20 as core 22 on socket 0 00:06:16.592 EAL: Detected lcore 21 as core 24 on socket 0 00:06:16.592 EAL: Detected lcore 22 as core 25 on socket 0 00:06:16.592 EAL: Detected lcore 23 as core 26 on socket 0 00:06:16.592 EAL: Detected lcore 24 as core 27 on socket 0 00:06:16.592 EAL: Detected lcore 25 as core 28 on socket 0 00:06:16.592 EAL: Detected lcore 26 as core 29 on socket 0 00:06:16.592 EAL: Detected lcore 27 as core 30 on socket 0 00:06:16.592 EAL: Detected lcore 28 as core 0 on socket 1 00:06:16.592 EAL: Detected lcore 29 as core 1 on socket 1 00:06:16.592 EAL: Detected lcore 30 as core 2 on socket 1 00:06:16.592 EAL: Detected lcore 31 as core 3 on socket 1 00:06:16.592 EAL: Detected lcore 32 as core 4 on socket 1 00:06:16.592 EAL: Detected lcore 33 as core 5 on socket 1 00:06:16.592 EAL: Detected lcore 34 as core 6 on socket 1 00:06:16.592 EAL: Detected lcore 35 as core 8 on socket 1 00:06:16.592 EAL: Detected lcore 36 as core 9 on socket 1 00:06:16.592 EAL: Detected lcore 37 as core 10 on socket 1 00:06:16.592 EAL: Detected lcore 38 as core 11 on socket 1 00:06:16.592 EAL: Detected lcore 39 as core 12 on socket 1 00:06:16.592 EAL: Detected lcore 40 as core 13 on socket 1 00:06:16.592 EAL: Detected lcore 41 as core 14 on socket 1 00:06:16.592 EAL: Detected lcore 42 as core 16 on socket 1 00:06:16.592 EAL: Detected lcore 43 as core 17 on socket 1 00:06:16.592 EAL: Detected lcore 44 as core 18 on socket 1 00:06:16.592 EAL: Detected lcore 45 as core 19 on socket 1 
00:06:16.592 EAL: Detected lcore 46 as core 20 on socket 1 00:06:16.592 EAL: Detected lcore 47 as core 21 on socket 1 00:06:16.592 EAL: Detected lcore 48 as core 22 on socket 1 00:06:16.592 EAL: Detected lcore 49 as core 24 on socket 1 00:06:16.592 EAL: Detected lcore 50 as core 25 on socket 1 00:06:16.592 EAL: Detected lcore 51 as core 26 on socket 1 00:06:16.592 EAL: Detected lcore 52 as core 27 on socket 1 00:06:16.592 EAL: Detected lcore 53 as core 28 on socket 1 00:06:16.592 EAL: Detected lcore 54 as core 29 on socket 1 00:06:16.592 EAL: Detected lcore 55 as core 30 on socket 1 00:06:16.592 EAL: Detected lcore 56 as core 0 on socket 0 00:06:16.592 EAL: Detected lcore 57 as core 1 on socket 0 00:06:16.592 EAL: Detected lcore 58 as core 2 on socket 0 00:06:16.592 EAL: Detected lcore 59 as core 3 on socket 0 00:06:16.592 EAL: Detected lcore 60 as core 4 on socket 0 00:06:16.592 EAL: Detected lcore 61 as core 5 on socket 0 00:06:16.592 EAL: Detected lcore 62 as core 6 on socket 0 00:06:16.592 EAL: Detected lcore 63 as core 8 on socket 0 00:06:16.592 EAL: Detected lcore 64 as core 9 on socket 0 00:06:16.592 EAL: Detected lcore 65 as core 10 on socket 0 00:06:16.592 EAL: Detected lcore 66 as core 11 on socket 0 00:06:16.592 EAL: Detected lcore 67 as core 12 on socket 0 00:06:16.592 EAL: Detected lcore 68 as core 13 on socket 0 00:06:16.592 EAL: Detected lcore 69 as core 14 on socket 0 00:06:16.592 EAL: Detected lcore 70 as core 16 on socket 0 00:06:16.592 EAL: Detected lcore 71 as core 17 on socket 0 00:06:16.592 EAL: Detected lcore 72 as core 18 on socket 0 00:06:16.592 EAL: Detected lcore 73 as core 19 on socket 0 00:06:16.592 EAL: Detected lcore 74 as core 20 on socket 0 00:06:16.592 EAL: Detected lcore 75 as core 21 on socket 0 00:06:16.592 EAL: Detected lcore 76 as core 22 on socket 0 00:06:16.592 EAL: Detected lcore 77 as core 24 on socket 0 00:06:16.592 EAL: Detected lcore 78 as core 25 on socket 0 00:06:16.592 EAL: Detected lcore 79 as core 26 on socket 0 00:06:16.592 EAL: Detected lcore 80 as core 27 on socket 0 00:06:16.592 EAL: Detected lcore 81 as core 28 on socket 0 00:06:16.592 EAL: Detected lcore 82 as core 29 on socket 0 00:06:16.592 EAL: Detected lcore 83 as core 30 on socket 0 00:06:16.592 EAL: Detected lcore 84 as core 0 on socket 1 00:06:16.592 EAL: Detected lcore 85 as core 1 on socket 1 00:06:16.592 EAL: Detected lcore 86 as core 2 on socket 1 00:06:16.592 EAL: Detected lcore 87 as core 3 on socket 1 00:06:16.592 EAL: Detected lcore 88 as core 4 on socket 1 00:06:16.592 EAL: Detected lcore 89 as core 5 on socket 1 00:06:16.592 EAL: Detected lcore 90 as core 6 on socket 1 00:06:16.592 EAL: Detected lcore 91 as core 8 on socket 1 00:06:16.592 EAL: Detected lcore 92 as core 9 on socket 1 00:06:16.592 EAL: Detected lcore 93 as core 10 on socket 1 00:06:16.592 EAL: Detected lcore 94 as core 11 on socket 1 00:06:16.592 EAL: Detected lcore 95 as core 12 on socket 1 00:06:16.592 EAL: Detected lcore 96 as core 13 on socket 1 00:06:16.592 EAL: Detected lcore 97 as core 14 on socket 1 00:06:16.592 EAL: Detected lcore 98 as core 16 on socket 1 00:06:16.592 EAL: Detected lcore 99 as core 17 on socket 1 00:06:16.592 EAL: Detected lcore 100 as core 18 on socket 1 00:06:16.592 EAL: Detected lcore 101 as core 19 on socket 1 00:06:16.592 EAL: Detected lcore 102 as core 20 on socket 1 00:06:16.592 EAL: Detected lcore 103 as core 21 on socket 1 00:06:16.592 EAL: Detected lcore 104 as core 22 on socket 1 00:06:16.592 EAL: Detected lcore 105 as core 24 on socket 1 00:06:16.592 EAL: Detected 
lcore 106 as core 25 on socket 1 00:06:16.592 EAL: Detected lcore 107 as core 26 on socket 1 00:06:16.592 EAL: Detected lcore 108 as core 27 on socket 1 00:06:16.592 EAL: Detected lcore 109 as core 28 on socket 1 00:06:16.592 EAL: Detected lcore 110 as core 29 on socket 1 00:06:16.592 EAL: Detected lcore 111 as core 30 on socket 1 00:06:16.592 EAL: Maximum logical cores by configuration: 128 00:06:16.592 EAL: Detected CPU lcores: 112 00:06:16.592 EAL: Detected NUMA nodes: 2 00:06:16.592 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:16.592 EAL: Detected shared linkage of DPDK 00:06:16.592 EAL: No shared files mode enabled, IPC will be disabled 00:06:16.592 EAL: Bus pci wants IOVA as 'DC' 00:06:16.592 EAL: Buses did not request a specific IOVA mode. 00:06:16.592 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:16.592 EAL: Selected IOVA mode 'VA' 00:06:16.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.592 EAL: Probing VFIO support... 00:06:16.592 EAL: IOMMU type 1 (Type 1) is supported 00:06:16.592 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:16.592 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:16.592 EAL: VFIO support initialized 00:06:16.592 EAL: Ask a virtual area of 0x2e000 bytes 00:06:16.592 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:16.592 EAL: Setting up physically contiguous memory... 00:06:16.592 EAL: Setting maximum number of open files to 524288 00:06:16.592 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:16.592 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:16.592 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:16.592 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.592 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:16.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.592 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.592 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:16.592 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:16.592 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.592 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:16.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.592 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.592 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:16.592 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:16.592 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.592 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:16.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.592 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.592 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:16.592 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:16.592 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.592 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:16.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.592 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.592 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:16.592 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:16.592 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:16.592 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.592 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:16.592 EAL: 
Memseg list allocated at socket 1, page size 0x800kB 00:06:16.592 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.592 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:16.592 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:16.592 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.592 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:16.592 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.592 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.592 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:16.592 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:16.592 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.592 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:16.592 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.592 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.592 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:16.592 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:16.592 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.592 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:16.592 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.592 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.593 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:16.593 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:16.593 EAL: Hugepages will be freed exactly as allocated. 00:06:16.593 EAL: No shared files mode enabled, IPC is disabled 00:06:16.593 EAL: No shared files mode enabled, IPC is disabled 00:06:16.593 EAL: TSC frequency is ~2200000 KHz 00:06:16.593 EAL: Main lcore 0 is ready (tid=7fc2f98f9a00;cpuset=[0]) 00:06:16.593 EAL: Trying to obtain current memory policy. 00:06:16.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.593 EAL: Restoring previous memory policy: 0 00:06:16.593 EAL: request: mp_malloc_sync 00:06:16.593 EAL: No shared files mode enabled, IPC is disabled 00:06:16.593 EAL: Heap on socket 0 was expanded by 2MB 00:06:16.593 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:16.853 EAL: Mem event callback 'spdk:(nil)' registered 00:06:16.853 00:06:16.853 00:06:16.853 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.853 http://cunit.sourceforge.net/ 00:06:16.853 00:06:16.853 00:06:16.853 Suite: components_suite 00:06:16.853 Test: vtophys_malloc_test ...passed 00:06:16.853 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:16.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.853 EAL: Restoring previous memory policy: 4 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was expanded by 4MB 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was shrunk by 4MB 00:06:16.853 EAL: Trying to obtain current memory policy. 
00:06:16.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.853 EAL: Restoring previous memory policy: 4 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was expanded by 6MB 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was shrunk by 6MB 00:06:16.853 EAL: Trying to obtain current memory policy. 00:06:16.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.853 EAL: Restoring previous memory policy: 4 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was expanded by 10MB 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was shrunk by 10MB 00:06:16.853 EAL: Trying to obtain current memory policy. 00:06:16.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.853 EAL: Restoring previous memory policy: 4 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was expanded by 18MB 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was shrunk by 18MB 00:06:16.853 EAL: Trying to obtain current memory policy. 00:06:16.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.853 EAL: Restoring previous memory policy: 4 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was expanded by 34MB 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was shrunk by 34MB 00:06:16.853 EAL: Trying to obtain current memory policy. 00:06:16.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.853 EAL: Restoring previous memory policy: 4 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was expanded by 66MB 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was shrunk by 66MB 00:06:16.853 EAL: Trying to obtain current memory policy. 
00:06:16.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.853 EAL: Restoring previous memory policy: 4 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was expanded by 130MB 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was shrunk by 130MB 00:06:16.853 EAL: Trying to obtain current memory policy. 00:06:16.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.853 EAL: Restoring previous memory policy: 4 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.853 EAL: request: mp_malloc_sync 00:06:16.853 EAL: No shared files mode enabled, IPC is disabled 00:06:16.853 EAL: Heap on socket 0 was expanded by 258MB 00:06:16.853 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.112 EAL: request: mp_malloc_sync 00:06:17.112 EAL: No shared files mode enabled, IPC is disabled 00:06:17.112 EAL: Heap on socket 0 was shrunk by 258MB 00:06:17.112 EAL: Trying to obtain current memory policy. 00:06:17.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.112 EAL: Restoring previous memory policy: 4 00:06:17.112 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.112 EAL: request: mp_malloc_sync 00:06:17.112 EAL: No shared files mode enabled, IPC is disabled 00:06:17.112 EAL: Heap on socket 0 was expanded by 514MB 00:06:17.112 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.370 EAL: request: mp_malloc_sync 00:06:17.370 EAL: No shared files mode enabled, IPC is disabled 00:06:17.370 EAL: Heap on socket 0 was shrunk by 514MB 00:06:17.370 EAL: Trying to obtain current memory policy. 
00:06:17.370 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.370 EAL: Restoring previous memory policy: 4
00:06:17.370 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.370 EAL: request: mp_malloc_sync
00:06:17.370 EAL: No shared files mode enabled, IPC is disabled
00:06:17.370 EAL: Heap on socket 0 was expanded by 1026MB
00:06:17.628 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.887 EAL: request: mp_malloc_sync
00:06:17.887 EAL: No shared files mode enabled, IPC is disabled
00:06:17.887 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:17.887 passed
00:06:17.887
00:06:17.887 Run Summary: Type Total Ran Passed Failed Inactive
00:06:17.887 suites 1 1 n/a 0 0
00:06:17.887 tests 2 2 2 0 0
00:06:17.887 asserts 497 497 497 0 n/a
00:06:17.887
00:06:17.887 Elapsed time = 1.015 seconds
00:06:17.887 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.887 EAL: request: mp_malloc_sync
00:06:17.887 EAL: No shared files mode enabled, IPC is disabled
00:06:17.887 EAL: Heap on socket 0 was shrunk by 2MB
00:06:17.887 EAL: No shared files mode enabled, IPC is disabled
00:06:17.887 EAL: No shared files mode enabled, IPC is disabled
00:06:17.887 EAL: No shared files mode enabled, IPC is disabled
00:06:17.887
00:06:17.887 real 0m1.166s
00:06:17.887 user 0m0.670s
00:06:17.887 sys 0m0.461s
00:06:17.887 21:23:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:17.887 21:23:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:17.887 ************************************
00:06:17.887 END TEST env_vtophys
00:06:17.887 ************************************
00:06:17.887 21:23:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:17.887 21:23:17 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:17.887 21:23:17 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:17.887 21:23:17 env -- common/autotest_common.sh@10 -- # set +x
00:06:17.887 ************************************
00:06:17.887 START TEST env_pci
00:06:17.887 ************************************
00:06:17.887 21:23:18 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:17.888
00:06:17.888
00:06:17.888 CUnit - A unit testing framework for C - Version 2.1-3
00:06:17.888 http://cunit.sourceforge.net/
00:06:17.888
00:06:17.888
00:06:17.888 Suite: pci
00:06:17.888 Test: pci_hook ...[2024-06-07 21:23:18.039917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1240233 has claimed it
00:06:17.888 EAL: Cannot find device (10000:00:01.0)
00:06:17.888 EAL: Failed to attach device on primary process
00:06:17.888 passed
00:06:17.888
00:06:17.888 Run Summary: Type Total Ran Passed Failed Inactive
00:06:17.888 suites 1 1 n/a 0 0
00:06:17.888 tests 1 1 1 0 0
00:06:17.888 asserts 25 25 25 0 n/a
00:06:17.888
00:06:17.888 Elapsed time = 0.029 seconds
00:06:17.888
00:06:17.888 real 0m0.048s
00:06:17.888 user 0m0.013s
00:06:17.888 sys 0m0.034s
00:06:17.888 21:23:18 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:17.888 21:23:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:17.888 ************************************
00:06:17.888 END TEST env_pci
00:06:17.888 ************************************
00:06:17.888 21:23:18 env -- env/env.sh@14 -- # argv='-c 0x1 '
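Note: the env_pci failure above ("Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1240233 has claimed it") is the test passing: spdk_pci_device_claim takes a per-BDF advisory lock file so two processes cannot drive the same device. A rough illustration of that idea using flock(1) -- an approximation for readability, not SPDK's actual locking call in lib/env_dpdk/pci.c:

    #!/usr/bin/env bash
    # Claim a (fake) BDF via an advisory lock; a second invocation fails fast.
    bdf=10000:00:01.0
    exec 9>"/var/tmp/spdk_pci_lock_$bdf"
    if ! flock -n 9; then
        echo "Cannot create lock on device /var/tmp/spdk_pci_lock_$bdf" >&2
        exit 1
    fi
    echo "claimed $bdf (pid $$)"
    sleep 30   # hold the claim while working with the device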
21:23:18 env -- env/env.sh@15 -- # uname 00:06:17.888 21:23:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:17.888 21:23:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:17.888 21:23:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:17.888 21:23:18 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:06:17.888 21:23:18 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.888 21:23:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.888 ************************************ 00:06:17.888 START TEST env_dpdk_post_init 00:06:17.888 ************************************ 00:06:17.888 21:23:18 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.147 EAL: Detected CPU lcores: 112 00:06:18.147 EAL: Detected NUMA nodes: 2 00:06:18.147 EAL: Detected shared linkage of DPDK 00:06:18.147 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.147 EAL: Selected IOVA mode 'VA' 00:06:18.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.147 EAL: VFIO support initialized 00:06:18.147 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.147 EAL: Using IOMMU type 1 (Type 1) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:18.147 EAL: Ignore mapping IO port bar(1) 00:06:18.147 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:18.407 EAL: Ignore mapping IO port bar(1) 00:06:18.407 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:18.407 EAL: Ignore mapping IO port bar(1) 00:06:18.407 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:18.407 EAL: Ignore mapping IO port bar(1) 00:06:18.407 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:18.407 EAL: Ignore mapping IO port bar(1) 00:06:18.407 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:18.407 EAL: Ignore mapping IO port bar(1) 00:06:18.407 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:18.407 EAL: Ignore mapping IO port bar(1) 00:06:18.407 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:18.407 EAL: 
Ignore mapping IO port bar(1) 00:06:18.407 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:18.976 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:06:22.262 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:06:22.262 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:06:22.522 Starting DPDK initialization... 00:06:22.522 Starting SPDK post initialization... 00:06:22.522 SPDK NVMe probe 00:06:22.522 Attaching to 0000:86:00.0 00:06:22.522 Attached to 0000:86:00.0 00:06:22.522 Cleaning up... 00:06:22.522 00:06:22.522 real 0m4.500s 00:06:22.522 user 0m3.405s 00:06:22.522 sys 0m0.151s 00:06:22.522 21:23:22 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.522 21:23:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.522 ************************************ 00:06:22.522 END TEST env_dpdk_post_init 00:06:22.522 ************************************ 00:06:22.522 21:23:22 env -- env/env.sh@26 -- # uname 00:06:22.522 21:23:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:22.522 21:23:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.522 21:23:22 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:22.522 21:23:22 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.522 21:23:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.522 ************************************ 00:06:22.522 START TEST env_mem_callbacks 00:06:22.522 ************************************ 00:06:22.522 21:23:22 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.522 EAL: Detected CPU lcores: 112 00:06:22.522 EAL: Detected NUMA nodes: 2 00:06:22.522 EAL: Detected shared linkage of DPDK 00:06:22.522 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:22.522 EAL: Selected IOVA mode 'VA' 00:06:22.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.522 EAL: VFIO support initialized 00:06:22.522 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:22.522 00:06:22.522 00:06:22.522 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.522 http://cunit.sourceforge.net/ 00:06:22.522 00:06:22.522 00:06:22.522 Suite: memory 00:06:22.522 Test: test ... 
00:06:22.522 register 0x200000200000 2097152
00:06:22.522 malloc 3145728
00:06:22.522 register 0x200000400000 4194304
00:06:22.522 buf 0x200000500000 len 3145728 PASSED
00:06:22.522 malloc 64
00:06:22.522 buf 0x2000004fff40 len 64 PASSED
00:06:22.522 malloc 4194304
00:06:22.522 register 0x200000800000 6291456
00:06:22.522 buf 0x200000a00000 len 4194304 PASSED
00:06:22.522 free 0x200000500000 3145728
00:06:22.522 free 0x2000004fff40 64
00:06:22.522 unregister 0x200000400000 4194304 PASSED
00:06:22.522 free 0x200000a00000 4194304
00:06:22.522 unregister 0x200000800000 6291456 PASSED
00:06:22.522 malloc 8388608
00:06:22.522 register 0x200000400000 10485760
00:06:22.522 buf 0x200000600000 len 8388608 PASSED
00:06:22.522 free 0x200000600000 8388608
00:06:22.522 unregister 0x200000400000 10485760 PASSED
00:06:22.522 passed
00:06:22.522
00:06:22.522 Run Summary: Type Total Ran Passed Failed Inactive
00:06:22.522 suites 1 1 n/a 0 0
00:06:22.522 tests 1 1 1 0 0
00:06:22.522 asserts 15 15 15 0 n/a
00:06:22.522
00:06:22.522 Elapsed time = 0.007 seconds
00:06:22.522
00:06:22.522 real 0m0.066s
00:06:22.522 user 0m0.021s
00:06:22.522 sys 0m0.045s
00:06:22.522 21:23:22 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:22.522 21:23:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:22.522 ************************************
00:06:22.522 END TEST env_mem_callbacks
00:06:22.522 ************************************
00:06:22.781
00:06:22.781 real 0m6.418s
00:06:22.781 user 0m4.465s
00:06:22.781 sys 0m1.002s
00:06:22.781 21:23:22 env -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:22.781 21:23:22 env -- common/autotest_common.sh@10 -- # set +x
00:06:22.781 ************************************
00:06:22.781 END TEST env
00:06:22.781 ************************************
00:06:22.781 21:23:22 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:22.781 21:23:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:22.781 21:23:22 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:22.781 21:23:22 -- common/autotest_common.sh@10 -- # set +x
00:06:22.781 ************************************
00:06:22.781 START TEST rpc
00:06:22.781 ************************************
00:06:22.781 21:23:22 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:22.781 * Looking for test storage...
00:06:22.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:22.781 21:23:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1241152
00:06:22.781 21:23:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:22.781 21:23:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1241152
00:06:22.781 21:23:22 rpc -- common/autotest_common.sh@830 -- # '[' -z 1241152 ']'
00:06:22.781 21:23:22 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.781 21:23:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:22.781 21:23:22 rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:22.781 21:23:22 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
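Note: waitforlisten, entered above with max_retries=100, is what turns "Waiting for process to start up..." into a bounded poll: it retries a cheap RPC against the target's UNIX domain socket until spdk_tgt answers or the budget runs out. A sketch of the same loop (using rpc_get_methods as the probe call is an assumption; any inexpensive RPC would do):

    #!/usr/bin/env bash
    # Poll the SPDK target's RPC socket until it accepts requests.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        if "$rootdir/scripts/rpc.py" -t 1 -s "$sock" rpc_get_methods &>/dev/null; then
            echo "target is up on $sock"
            exit 0
        fi
        sleep 0.5
    done
    echo "timed out waiting for $sock" >&2
    exit 1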
00:06:22.781 21:23:22 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:22.781 21:23:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.781 [2024-06-07 21:23:23.035194] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:06:22.781 [2024-06-07 21:23:23.035255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241152 ] 00:06:23.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.048 [2024-06-07 21:23:23.126868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.048 [2024-06-07 21:23:23.217580] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:23.048 [2024-06-07 21:23:23.217621] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1241152' to capture a snapshot of events at runtime. 00:06:23.048 [2024-06-07 21:23:23.217632] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.048 [2024-06-07 21:23:23.217641] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.048 [2024-06-07 21:23:23.217649] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1241152 for offline analysis/debug. 00:06:23.048 [2024-06-07 21:23:23.217677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.984 21:23:23 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:23.984 21:23:23 rpc -- common/autotest_common.sh@863 -- # return 0 00:06:23.984 21:23:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:23.984 21:23:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:23.984 21:23:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:23.984 21:23:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:23.984 21:23:23 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:23.984 21:23:23 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.984 21:23:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.984 ************************************ 00:06:23.984 START TEST rpc_integrity 00:06:23.984 ************************************ 00:06:23.984 21:23:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:06:23.984 21:23:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:23.984 21:23:23 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.984 21:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.984 21:23:23 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.984 21:23:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:23.984 21:23:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:23.984 21:23:24 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:23.984 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:23.984 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.984 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.984 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.984 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:23.984 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:23.984 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.984 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.984 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.984 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:23.984 { 00:06:23.984 "name": "Malloc0", 00:06:23.984 "aliases": [ 00:06:23.984 "3c034f2d-28ed-419e-b465-71c27f7f0245" 00:06:23.984 ], 00:06:23.984 "product_name": "Malloc disk", 00:06:23.984 "block_size": 512, 00:06:23.984 "num_blocks": 16384, 00:06:23.984 "uuid": "3c034f2d-28ed-419e-b465-71c27f7f0245", 00:06:23.984 "assigned_rate_limits": { 00:06:23.984 "rw_ios_per_sec": 0, 00:06:23.984 "rw_mbytes_per_sec": 0, 00:06:23.984 "r_mbytes_per_sec": 0, 00:06:23.984 "w_mbytes_per_sec": 0 00:06:23.984 }, 00:06:23.984 "claimed": false, 00:06:23.984 "zoned": false, 00:06:23.984 "supported_io_types": { 00:06:23.984 "read": true, 00:06:23.984 "write": true, 00:06:23.984 "unmap": true, 00:06:23.984 "write_zeroes": true, 00:06:23.984 "flush": true, 00:06:23.984 "reset": true, 00:06:23.984 "compare": false, 00:06:23.984 "compare_and_write": false, 00:06:23.984 "abort": true, 00:06:23.984 "nvme_admin": false, 00:06:23.984 "nvme_io": false 00:06:23.984 }, 00:06:23.984 "memory_domains": [ 00:06:23.984 { 00:06:23.984 "dma_device_id": "system", 00:06:23.984 "dma_device_type": 1 00:06:23.984 }, 00:06:23.984 { 00:06:23.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.984 "dma_device_type": 2 00:06:23.984 } 00:06:23.984 ], 00:06:23.984 "driver_specific": {} 00:06:23.984 } 00:06:23.984 ]' 00:06:23.984 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:23.984 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:23.984 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:23.984 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.984 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.984 [2024-06-07 21:23:24.123111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:23.984 [2024-06-07 21:23:24.123146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.985 [2024-06-07 21:23:24.123163] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b33630 00:06:23.985 [2024-06-07 21:23:24.123172] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.985 [2024-06-07 21:23:24.124711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.985 [2024-06-07 21:23:24.124738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:23.985 Passthru0 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:23.985 { 00:06:23.985 "name": "Malloc0", 00:06:23.985 "aliases": [ 00:06:23.985 "3c034f2d-28ed-419e-b465-71c27f7f0245" 00:06:23.985 ], 00:06:23.985 "product_name": "Malloc disk", 00:06:23.985 "block_size": 512, 00:06:23.985 "num_blocks": 16384, 00:06:23.985 "uuid": "3c034f2d-28ed-419e-b465-71c27f7f0245", 00:06:23.985 "assigned_rate_limits": { 00:06:23.985 "rw_ios_per_sec": 0, 00:06:23.985 "rw_mbytes_per_sec": 0, 00:06:23.985 "r_mbytes_per_sec": 0, 00:06:23.985 "w_mbytes_per_sec": 0 00:06:23.985 }, 00:06:23.985 "claimed": true, 00:06:23.985 "claim_type": "exclusive_write", 00:06:23.985 "zoned": false, 00:06:23.985 "supported_io_types": { 00:06:23.985 "read": true, 00:06:23.985 "write": true, 00:06:23.985 "unmap": true, 00:06:23.985 "write_zeroes": true, 00:06:23.985 "flush": true, 00:06:23.985 "reset": true, 00:06:23.985 "compare": false, 00:06:23.985 "compare_and_write": false, 00:06:23.985 "abort": true, 00:06:23.985 "nvme_admin": false, 00:06:23.985 "nvme_io": false 00:06:23.985 }, 00:06:23.985 "memory_domains": [ 00:06:23.985 { 00:06:23.985 "dma_device_id": "system", 00:06:23.985 "dma_device_type": 1 00:06:23.985 }, 00:06:23.985 { 00:06:23.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.985 "dma_device_type": 2 00:06:23.985 } 00:06:23.985 ], 00:06:23.985 "driver_specific": {} 00:06:23.985 }, 00:06:23.985 { 00:06:23.985 "name": "Passthru0", 00:06:23.985 "aliases": [ 00:06:23.985 "30513145-b8fa-58df-84d3-9a87145a3e1c" 00:06:23.985 ], 00:06:23.985 "product_name": "passthru", 00:06:23.985 "block_size": 512, 00:06:23.985 "num_blocks": 16384, 00:06:23.985 "uuid": "30513145-b8fa-58df-84d3-9a87145a3e1c", 00:06:23.985 "assigned_rate_limits": { 00:06:23.985 "rw_ios_per_sec": 0, 00:06:23.985 "rw_mbytes_per_sec": 0, 00:06:23.985 "r_mbytes_per_sec": 0, 00:06:23.985 "w_mbytes_per_sec": 0 00:06:23.985 }, 00:06:23.985 "claimed": false, 00:06:23.985 "zoned": false, 00:06:23.985 "supported_io_types": { 00:06:23.985 "read": true, 00:06:23.985 "write": true, 00:06:23.985 "unmap": true, 00:06:23.985 "write_zeroes": true, 00:06:23.985 "flush": true, 00:06:23.985 "reset": true, 00:06:23.985 "compare": false, 00:06:23.985 "compare_and_write": false, 00:06:23.985 "abort": true, 00:06:23.985 "nvme_admin": false, 00:06:23.985 "nvme_io": false 00:06:23.985 }, 00:06:23.985 "memory_domains": [ 00:06:23.985 { 00:06:23.985 "dma_device_id": "system", 00:06:23.985 "dma_device_type": 1 00:06:23.985 }, 00:06:23.985 { 00:06:23.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.985 "dma_device_type": 2 00:06:23.985 } 00:06:23.985 ], 00:06:23.985 "driver_specific": { 00:06:23.985 "passthru": { 00:06:23.985 "name": "Passthru0", 00:06:23.985 "base_bdev_name": "Malloc0" 00:06:23.985 } 00:06:23.985 } 00:06:23.985 } 00:06:23.985 ]' 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
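rpc_integrity walks the bdev management RPCs end to end: create a malloc bdev, stack a passthru bdev on top of it with bdev_passthru_create, verify both appear in the bdev_get_bdevs output (the two JSON documents above, counted with jq length), then delete them in reverse order and confirm the list is empty again. The rpc_cmd calls in the trace go through the test harness's wrapper around SPDK's Python RPC client, so the same sequence can be replayed by hand against a running spdk_tgt; a sketch, assuming the default /var/tmp/spdk.sock socket and an SPDK checkout as the working directory:

  ./scripts/rpc.py bdev_malloc_create 8 512      # 8 MiB backing store, 512 B blocks -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 2
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 0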
00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.985 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:23.985 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:24.243 21:23:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:24.243 00:06:24.243 real 0m0.279s 00:06:24.243 user 0m0.190s 00:06:24.243 sys 0m0.024s 00:06:24.243 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.243 21:23:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 ************************************ 00:06:24.243 END TEST rpc_integrity 00:06:24.243 ************************************ 00:06:24.243 21:23:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:24.243 21:23:24 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:24.243 21:23:24 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.243 21:23:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 ************************************ 00:06:24.243 START TEST rpc_plugins 00:06:24.243 ************************************ 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:24.243 { 00:06:24.243 "name": "Malloc1", 00:06:24.243 "aliases": [ 00:06:24.243 "42e0bac2-52ac-4d95-903a-0d1d6590fe6d" 00:06:24.243 ], 00:06:24.243 "product_name": "Malloc disk", 00:06:24.243 "block_size": 4096, 00:06:24.243 "num_blocks": 256, 00:06:24.243 "uuid": "42e0bac2-52ac-4d95-903a-0d1d6590fe6d", 00:06:24.243 "assigned_rate_limits": { 00:06:24.243 "rw_ios_per_sec": 0, 00:06:24.243 "rw_mbytes_per_sec": 0, 00:06:24.243 "r_mbytes_per_sec": 0, 00:06:24.243 "w_mbytes_per_sec": 0 00:06:24.243 }, 00:06:24.243 "claimed": false, 00:06:24.243 "zoned": false, 00:06:24.243 "supported_io_types": { 00:06:24.243 "read": true, 00:06:24.243 "write": true, 00:06:24.243 "unmap": true, 00:06:24.243 "write_zeroes": true, 
00:06:24.243 "flush": true, 00:06:24.243 "reset": true, 00:06:24.243 "compare": false, 00:06:24.243 "compare_and_write": false, 00:06:24.243 "abort": true, 00:06:24.243 "nvme_admin": false, 00:06:24.243 "nvme_io": false 00:06:24.243 }, 00:06:24.243 "memory_domains": [ 00:06:24.243 { 00:06:24.243 "dma_device_id": "system", 00:06:24.243 "dma_device_type": 1 00:06:24.243 }, 00:06:24.243 { 00:06:24.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.243 "dma_device_type": 2 00:06:24.243 } 00:06:24.243 ], 00:06:24.243 "driver_specific": {} 00:06:24.243 } 00:06:24.243 ]' 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:24.243 21:23:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:24.243 00:06:24.243 real 0m0.141s 00:06:24.243 user 0m0.086s 00:06:24.243 sys 0m0.020s 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.243 21:23:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 ************************************ 00:06:24.243 END TEST rpc_plugins 00:06:24.243 ************************************ 00:06:24.243 21:23:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:24.243 21:23:24 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:24.243 21:23:24 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.243 21:23:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.502 ************************************ 00:06:24.502 START TEST rpc_trace_cmd_test 00:06:24.502 ************************************ 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:24.502 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1241152", 00:06:24.502 "tpoint_group_mask": "0x8", 00:06:24.502 "iscsi_conn": { 00:06:24.502 "mask": "0x2", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "scsi": { 00:06:24.502 "mask": "0x4", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "bdev": { 00:06:24.502 "mask": "0x8", 00:06:24.502 "tpoint_mask": 
"0xffffffffffffffff" 00:06:24.502 }, 00:06:24.502 "nvmf_rdma": { 00:06:24.502 "mask": "0x10", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "nvmf_tcp": { 00:06:24.502 "mask": "0x20", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "ftl": { 00:06:24.502 "mask": "0x40", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "blobfs": { 00:06:24.502 "mask": "0x80", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "dsa": { 00:06:24.502 "mask": "0x200", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "thread": { 00:06:24.502 "mask": "0x400", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "nvme_pcie": { 00:06:24.502 "mask": "0x800", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "iaa": { 00:06:24.502 "mask": "0x1000", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "nvme_tcp": { 00:06:24.502 "mask": "0x2000", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "bdev_nvme": { 00:06:24.502 "mask": "0x4000", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 }, 00:06:24.502 "sock": { 00:06:24.502 "mask": "0x8000", 00:06:24.502 "tpoint_mask": "0x0" 00:06:24.502 } 00:06:24.502 }' 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:24.502 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:24.762 21:23:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:24.762 00:06:24.762 real 0m0.242s 00:06:24.762 user 0m0.206s 00:06:24.762 sys 0m0.029s 00:06:24.762 21:23:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.762 21:23:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.762 ************************************ 00:06:24.762 END TEST rpc_trace_cmd_test 00:06:24.762 ************************************ 00:06:24.762 21:23:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:24.762 21:23:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:24.762 21:23:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:24.762 21:23:24 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:24.762 21:23:24 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.762 21:23:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.762 ************************************ 00:06:24.762 START TEST rpc_daemon_integrity 00:06:24.762 ************************************ 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:24.762 { 00:06:24.762 "name": "Malloc2", 00:06:24.762 "aliases": [ 00:06:24.762 "edb1e759-2428-4ba3-b587-e9b421cff82e" 00:06:24.762 ], 00:06:24.762 "product_name": "Malloc disk", 00:06:24.762 "block_size": 512, 00:06:24.762 "num_blocks": 16384, 00:06:24.762 "uuid": "edb1e759-2428-4ba3-b587-e9b421cff82e", 00:06:24.762 "assigned_rate_limits": { 00:06:24.762 "rw_ios_per_sec": 0, 00:06:24.762 "rw_mbytes_per_sec": 0, 00:06:24.762 "r_mbytes_per_sec": 0, 00:06:24.762 "w_mbytes_per_sec": 0 00:06:24.762 }, 00:06:24.762 "claimed": false, 00:06:24.762 "zoned": false, 00:06:24.762 "supported_io_types": { 00:06:24.762 "read": true, 00:06:24.762 "write": true, 00:06:24.762 "unmap": true, 00:06:24.762 "write_zeroes": true, 00:06:24.762 "flush": true, 00:06:24.762 "reset": true, 00:06:24.762 "compare": false, 00:06:24.762 "compare_and_write": false, 00:06:24.762 "abort": true, 00:06:24.762 "nvme_admin": false, 00:06:24.762 "nvme_io": false 00:06:24.762 }, 00:06:24.762 "memory_domains": [ 00:06:24.762 { 00:06:24.762 "dma_device_id": "system", 00:06:24.762 "dma_device_type": 1 00:06:24.762 }, 00:06:24.762 { 00:06:24.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.762 "dma_device_type": 2 00:06:24.762 } 00:06:24.762 ], 00:06:24.762 "driver_specific": {} 00:06:24.762 } 00:06:24.762 ]' 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.762 [2024-06-07 21:23:24.993670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:24.762 [2024-06-07 21:23:24.993705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.762 [2024-06-07 21:23:24.993724] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2aea0 00:06:24.762 [2024-06-07 21:23:24.993733] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.762 [2024-06-07 21:23:24.995119] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.762 [2024-06-07 21:23:24.995143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:24.762 Passthru0 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.762 21:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.762 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.762 21:23:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:24.762 { 00:06:24.762 "name": "Malloc2", 00:06:24.762 "aliases": [ 00:06:24.762 "edb1e759-2428-4ba3-b587-e9b421cff82e" 00:06:24.762 ], 00:06:24.762 "product_name": "Malloc disk", 00:06:24.762 "block_size": 512, 00:06:24.762 "num_blocks": 16384, 00:06:24.762 "uuid": "edb1e759-2428-4ba3-b587-e9b421cff82e", 00:06:24.762 "assigned_rate_limits": { 00:06:24.762 "rw_ios_per_sec": 0, 00:06:24.762 "rw_mbytes_per_sec": 0, 00:06:24.762 "r_mbytes_per_sec": 0, 00:06:24.762 "w_mbytes_per_sec": 0 00:06:24.762 }, 00:06:24.762 "claimed": true, 00:06:24.762 "claim_type": "exclusive_write", 00:06:24.762 "zoned": false, 00:06:24.762 "supported_io_types": { 00:06:24.762 "read": true, 00:06:24.762 "write": true, 00:06:24.762 "unmap": true, 00:06:24.762 "write_zeroes": true, 00:06:24.762 "flush": true, 00:06:24.762 "reset": true, 00:06:24.762 "compare": false, 00:06:24.762 "compare_and_write": false, 00:06:24.762 "abort": true, 00:06:24.762 "nvme_admin": false, 00:06:24.762 "nvme_io": false 00:06:24.762 }, 00:06:24.762 "memory_domains": [ 00:06:24.762 { 00:06:24.762 "dma_device_id": "system", 00:06:24.762 "dma_device_type": 1 00:06:24.762 }, 00:06:24.762 { 00:06:24.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.762 "dma_device_type": 2 00:06:24.762 } 00:06:24.762 ], 00:06:24.762 "driver_specific": {} 00:06:24.762 }, 00:06:24.762 { 00:06:24.762 "name": "Passthru0", 00:06:24.762 "aliases": [ 00:06:24.762 "d1ba3442-7e9e-5178-80aa-de9b2438b668" 00:06:24.762 ], 00:06:24.762 "product_name": "passthru", 00:06:24.762 "block_size": 512, 00:06:24.762 "num_blocks": 16384, 00:06:24.762 "uuid": "d1ba3442-7e9e-5178-80aa-de9b2438b668", 00:06:24.762 "assigned_rate_limits": { 00:06:24.762 "rw_ios_per_sec": 0, 00:06:24.762 "rw_mbytes_per_sec": 0, 00:06:24.762 "r_mbytes_per_sec": 0, 00:06:24.762 "w_mbytes_per_sec": 0 00:06:24.762 }, 00:06:24.762 "claimed": false, 00:06:24.762 "zoned": false, 00:06:24.762 "supported_io_types": { 00:06:24.762 "read": true, 00:06:24.762 "write": true, 00:06:24.762 "unmap": true, 00:06:24.762 "write_zeroes": true, 00:06:24.762 "flush": true, 00:06:24.762 "reset": true, 00:06:24.762 "compare": false, 00:06:24.762 "compare_and_write": false, 00:06:24.762 "abort": true, 00:06:24.762 "nvme_admin": false, 00:06:24.762 "nvme_io": false 00:06:24.762 }, 00:06:24.762 "memory_domains": [ 00:06:24.762 { 00:06:24.762 "dma_device_id": "system", 00:06:24.762 "dma_device_type": 1 00:06:24.762 }, 00:06:24.762 { 00:06:24.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.762 "dma_device_type": 2 00:06:24.762 } 00:06:24.762 ], 00:06:24.762 "driver_specific": { 00:06:24.762 "passthru": { 00:06:24.762 "name": "Passthru0", 00:06:24.762 "base_bdev_name": "Malloc2" 00:06:24.762 } 00:06:24.762 } 00:06:24.762 } 00:06:24.762 ]' 00:06:24.763 21:23:25 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:25.023 00:06:25.023 real 0m0.292s 00:06:25.023 user 0m0.201s 00:06:25.023 sys 0m0.026s 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.023 21:23:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.023 ************************************ 00:06:25.023 END TEST rpc_daemon_integrity 00:06:25.023 ************************************ 00:06:25.023 21:23:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:25.023 21:23:25 rpc -- rpc/rpc.sh@84 -- # killprocess 1241152 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@949 -- # '[' -z 1241152 ']' 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@953 -- # kill -0 1241152 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@954 -- # uname 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1241152 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1241152' 00:06:25.023 killing process with pid 1241152 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@968 -- # kill 1241152 00:06:25.023 21:23:25 rpc -- common/autotest_common.sh@973 -- # wait 1241152 00:06:25.590 00:06:25.590 real 0m2.675s 00:06:25.590 user 0m3.527s 00:06:25.590 sys 0m0.716s 00:06:25.590 21:23:25 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.590 21:23:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.590 ************************************ 00:06:25.590 END TEST rpc 00:06:25.590 ************************************ 00:06:25.590 21:23:25 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:25.590 21:23:25 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:25.590 21:23:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.590 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:06:25.590 ************************************ 00:06:25.590 START TEST skip_rpc 00:06:25.590 ************************************ 00:06:25.590 21:23:25 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:25.590 * Looking for test storage... 00:06:25.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:25.590 21:23:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:25.590 21:23:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:25.590 21:23:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:25.590 21:23:25 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:25.590 21:23:25 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.590 21:23:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.590 ************************************ 00:06:25.590 START TEST skip_rpc 00:06:25.590 ************************************ 00:06:25.590 21:23:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:06:25.590 21:23:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1241858 00:06:25.590 21:23:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:25.590 21:23:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.590 21:23:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:25.590 [2024-06-07 21:23:25.806164] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
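skip_rpc is the negative test for --no-rpc-server: the target is started without any RPC listener, so no /var/tmp/spdk.sock is ever created, and the NOT rpc_cmd spdk_get_version check that follows passes only if the client call fails. The idea in isolation (paths and the sleep are illustrative):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                        # give the target time to come up
  if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: an RPC server answered"    # would fail the test
  else
    echo "failed as expected: no RPC server"
  fi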
00:06:25.590 [2024-06-07 21:23:25.806218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241858 ] 00:06:25.590 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.849 [2024-06-07 21:23:25.895314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.849 [2024-06-07 21:23:25.984913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1241858 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 1241858 ']' 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 1241858 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1241858 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1241858' 00:06:31.121 killing process with pid 1241858 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 1241858 00:06:31.121 21:23:30 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 1241858 00:06:31.121 00:06:31.121 real 0m5.398s 00:06:31.121 user 0m5.111s 00:06:31.121 sys 0m0.306s 00:06:31.121 21:23:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:31.121 21:23:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.121 ************************************ 
00:06:31.121 END TEST skip_rpc 00:06:31.121 ************************************ 00:06:31.121 21:23:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:31.121 21:23:31 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:31.121 21:23:31 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:31.121 21:23:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.121 ************************************ 00:06:31.121 START TEST skip_rpc_with_json 00:06:31.121 ************************************ 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1242928 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1242928 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 1242928 ']' 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:31.121 21:23:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:31.121 [2024-06-07 21:23:31.275172] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
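skip_rpc_with_json runs in two phases, and the first begins here: the harness asks for the not-yet-existing TCP transport and expects a JSON-RPC error ("transport 'tcp' does not exist"), creates it with nvmf_create_transport, then snapshots the whole target state with save_config; the large JSON document below is that snapshot being written to test/rpc/config.json. Replayed by hand, the phase looks roughly like:

  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # JSON-RPC error until a transport exists
  ./scripts/rpc.py nvmf_create_transport -t tcp       # logs "*** TCP Transport Init ***"
  ./scripts/rpc.py save_config > config.json          # output path is illustrative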
00:06:31.121 [2024-06-07 21:23:31.275227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242928 ] 00:06:31.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.121 [2024-06-07 21:23:31.365407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.381 [2024-06-07 21:23:31.456649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.949 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:31.949 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:06:31.949 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:31.949 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.949 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:31.949 [2024-06-07 21:23:32.214702] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:32.209 request: 00:06:32.209 { 00:06:32.209 "trtype": "tcp", 00:06:32.209 "method": "nvmf_get_transports", 00:06:32.209 "req_id": 1 00:06:32.209 } 00:06:32.209 Got JSON-RPC error response 00:06:32.209 response: 00:06:32.209 { 00:06:32.209 "code": -19, 00:06:32.209 "message": "No such device" 00:06:32.209 } 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:32.209 [2024-06-07 21:23:32.222821] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:32.209 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:32.209 { 00:06:32.209 "subsystems": [ 00:06:32.209 { 00:06:32.209 "subsystem": "vfio_user_target", 00:06:32.209 "config": null 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "keyring", 00:06:32.209 "config": [] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "iobuf", 00:06:32.209 "config": [ 00:06:32.209 { 00:06:32.209 "method": "iobuf_set_options", 00:06:32.209 "params": { 00:06:32.209 "small_pool_count": 8192, 00:06:32.209 "large_pool_count": 1024, 00:06:32.209 "small_bufsize": 8192, 00:06:32.209 "large_bufsize": 135168 00:06:32.209 } 00:06:32.209 } 00:06:32.209 ] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "sock", 00:06:32.209 "config": [ 00:06:32.209 { 00:06:32.209 "method": "sock_set_default_impl", 00:06:32.209 "params": { 00:06:32.209 "impl_name": "posix" 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 
00:06:32.209 "method": "sock_impl_set_options", 00:06:32.209 "params": { 00:06:32.209 "impl_name": "ssl", 00:06:32.209 "recv_buf_size": 4096, 00:06:32.209 "send_buf_size": 4096, 00:06:32.209 "enable_recv_pipe": true, 00:06:32.209 "enable_quickack": false, 00:06:32.209 "enable_placement_id": 0, 00:06:32.209 "enable_zerocopy_send_server": true, 00:06:32.209 "enable_zerocopy_send_client": false, 00:06:32.209 "zerocopy_threshold": 0, 00:06:32.209 "tls_version": 0, 00:06:32.209 "enable_ktls": false 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "sock_impl_set_options", 00:06:32.209 "params": { 00:06:32.209 "impl_name": "posix", 00:06:32.209 "recv_buf_size": 2097152, 00:06:32.209 "send_buf_size": 2097152, 00:06:32.209 "enable_recv_pipe": true, 00:06:32.209 "enable_quickack": false, 00:06:32.209 "enable_placement_id": 0, 00:06:32.209 "enable_zerocopy_send_server": true, 00:06:32.209 "enable_zerocopy_send_client": false, 00:06:32.209 "zerocopy_threshold": 0, 00:06:32.209 "tls_version": 0, 00:06:32.209 "enable_ktls": false 00:06:32.209 } 00:06:32.209 } 00:06:32.209 ] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "vmd", 00:06:32.209 "config": [] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "accel", 00:06:32.209 "config": [ 00:06:32.209 { 00:06:32.209 "method": "accel_set_options", 00:06:32.209 "params": { 00:06:32.209 "small_cache_size": 128, 00:06:32.209 "large_cache_size": 16, 00:06:32.209 "task_count": 2048, 00:06:32.209 "sequence_count": 2048, 00:06:32.209 "buf_count": 2048 00:06:32.209 } 00:06:32.209 } 00:06:32.209 ] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "bdev", 00:06:32.209 "config": [ 00:06:32.209 { 00:06:32.209 "method": "bdev_set_options", 00:06:32.209 "params": { 00:06:32.209 "bdev_io_pool_size": 65535, 00:06:32.209 "bdev_io_cache_size": 256, 00:06:32.209 "bdev_auto_examine": true, 00:06:32.209 "iobuf_small_cache_size": 128, 00:06:32.209 "iobuf_large_cache_size": 16 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "bdev_raid_set_options", 00:06:32.209 "params": { 00:06:32.209 "process_window_size_kb": 1024 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "bdev_iscsi_set_options", 00:06:32.209 "params": { 00:06:32.209 "timeout_sec": 30 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "bdev_nvme_set_options", 00:06:32.209 "params": { 00:06:32.209 "action_on_timeout": "none", 00:06:32.209 "timeout_us": 0, 00:06:32.209 "timeout_admin_us": 0, 00:06:32.209 "keep_alive_timeout_ms": 10000, 00:06:32.209 "arbitration_burst": 0, 00:06:32.209 "low_priority_weight": 0, 00:06:32.209 "medium_priority_weight": 0, 00:06:32.209 "high_priority_weight": 0, 00:06:32.209 "nvme_adminq_poll_period_us": 10000, 00:06:32.209 "nvme_ioq_poll_period_us": 0, 00:06:32.209 "io_queue_requests": 0, 00:06:32.209 "delay_cmd_submit": true, 00:06:32.209 "transport_retry_count": 4, 00:06:32.209 "bdev_retry_count": 3, 00:06:32.209 "transport_ack_timeout": 0, 00:06:32.209 "ctrlr_loss_timeout_sec": 0, 00:06:32.209 "reconnect_delay_sec": 0, 00:06:32.209 "fast_io_fail_timeout_sec": 0, 00:06:32.209 "disable_auto_failback": false, 00:06:32.209 "generate_uuids": false, 00:06:32.209 "transport_tos": 0, 00:06:32.209 "nvme_error_stat": false, 00:06:32.209 "rdma_srq_size": 0, 00:06:32.209 "io_path_stat": false, 00:06:32.209 "allow_accel_sequence": false, 00:06:32.209 "rdma_max_cq_size": 0, 00:06:32.209 "rdma_cm_event_timeout_ms": 0, 00:06:32.209 "dhchap_digests": [ 00:06:32.209 "sha256", 00:06:32.209 "sha384", 
00:06:32.209 "sha512" 00:06:32.209 ], 00:06:32.209 "dhchap_dhgroups": [ 00:06:32.209 "null", 00:06:32.209 "ffdhe2048", 00:06:32.209 "ffdhe3072", 00:06:32.209 "ffdhe4096", 00:06:32.209 "ffdhe6144", 00:06:32.209 "ffdhe8192" 00:06:32.209 ] 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "bdev_nvme_set_hotplug", 00:06:32.209 "params": { 00:06:32.209 "period_us": 100000, 00:06:32.209 "enable": false 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "bdev_wait_for_examine" 00:06:32.209 } 00:06:32.209 ] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "scsi", 00:06:32.209 "config": null 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "scheduler", 00:06:32.209 "config": [ 00:06:32.209 { 00:06:32.209 "method": "framework_set_scheduler", 00:06:32.209 "params": { 00:06:32.209 "name": "static" 00:06:32.209 } 00:06:32.209 } 00:06:32.209 ] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "vhost_scsi", 00:06:32.209 "config": [] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "vhost_blk", 00:06:32.209 "config": [] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "ublk", 00:06:32.209 "config": [] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "nbd", 00:06:32.209 "config": [] 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "subsystem": "nvmf", 00:06:32.209 "config": [ 00:06:32.209 { 00:06:32.209 "method": "nvmf_set_config", 00:06:32.209 "params": { 00:06:32.209 "discovery_filter": "match_any", 00:06:32.209 "admin_cmd_passthru": { 00:06:32.209 "identify_ctrlr": false 00:06:32.209 } 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "nvmf_set_max_subsystems", 00:06:32.209 "params": { 00:06:32.209 "max_subsystems": 1024 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "nvmf_set_crdt", 00:06:32.209 "params": { 00:06:32.209 "crdt1": 0, 00:06:32.209 "crdt2": 0, 00:06:32.209 "crdt3": 0 00:06:32.209 } 00:06:32.209 }, 00:06:32.209 { 00:06:32.209 "method": "nvmf_create_transport", 00:06:32.209 "params": { 00:06:32.209 "trtype": "TCP", 00:06:32.210 "max_queue_depth": 128, 00:06:32.210 "max_io_qpairs_per_ctrlr": 127, 00:06:32.210 "in_capsule_data_size": 4096, 00:06:32.210 "max_io_size": 131072, 00:06:32.210 "io_unit_size": 131072, 00:06:32.210 "max_aq_depth": 128, 00:06:32.210 "num_shared_buffers": 511, 00:06:32.210 "buf_cache_size": 4294967295, 00:06:32.210 "dif_insert_or_strip": false, 00:06:32.210 "zcopy": false, 00:06:32.210 "c2h_success": true, 00:06:32.210 "sock_priority": 0, 00:06:32.210 "abort_timeout_sec": 1, 00:06:32.210 "ack_timeout": 0, 00:06:32.210 "data_wr_pool_size": 0 00:06:32.210 } 00:06:32.210 } 00:06:32.210 ] 00:06:32.210 }, 00:06:32.210 { 00:06:32.210 "subsystem": "iscsi", 00:06:32.210 "config": [ 00:06:32.210 { 00:06:32.210 "method": "iscsi_set_options", 00:06:32.210 "params": { 00:06:32.210 "node_base": "iqn.2016-06.io.spdk", 00:06:32.210 "max_sessions": 128, 00:06:32.210 "max_connections_per_session": 2, 00:06:32.210 "max_queue_depth": 64, 00:06:32.210 "default_time2wait": 2, 00:06:32.210 "default_time2retain": 20, 00:06:32.210 "first_burst_length": 8192, 00:06:32.210 "immediate_data": true, 00:06:32.210 "allow_duplicated_isid": false, 00:06:32.210 "error_recovery_level": 0, 00:06:32.210 "nop_timeout": 60, 00:06:32.210 "nop_in_interval": 30, 00:06:32.210 "disable_chap": false, 00:06:32.210 "require_chap": false, 00:06:32.210 "mutual_chap": false, 00:06:32.210 "chap_group": 0, 00:06:32.210 "max_large_datain_per_connection": 64, 00:06:32.210 "max_r2t_per_connection": 4, 
00:06:32.210 "pdu_pool_size": 36864, 00:06:32.210 "immediate_data_pool_size": 16384, 00:06:32.210 "data_out_pool_size": 2048 00:06:32.210 } 00:06:32.210 } 00:06:32.210 ] 00:06:32.210 } 00:06:32.210 ] 00:06:32.210 } 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1242928 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1242928 ']' 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1242928 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1242928 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1242928' 00:06:32.210 killing process with pid 1242928 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1242928 00:06:32.210 21:23:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1242928 00:06:32.779 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1243202 00:06:32.779 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:32.779 21:23:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1243202 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1243202 ']' 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1243202 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1243202 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1243202' 00:06:38.049 killing process with pid 1243202 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1243202 00:06:38.049 21:23:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1243202 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:38.049 
00:06:38.049 real 0m6.926s 00:06:38.049 user 0m6.810s 00:06:38.049 sys 0m0.664s 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:38.049 ************************************ 00:06:38.049 END TEST skip_rpc_with_json 00:06:38.049 ************************************ 00:06:38.049 21:23:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:38.049 21:23:38 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:38.049 21:23:38 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.049 21:23:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.049 ************************************ 00:06:38.049 START TEST skip_rpc_with_delay 00:06:38.049 ************************************ 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:38.049 [2024-06-07 21:23:38.272899] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
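skip_rpc_with_delay exercises an argument-validation path rather than a running server: --wait-for-rpc only makes sense if an RPC server will eventually exist, so combining it with --no-rpc-server is rejected inside spdk_app_start, which is the app.c error immediately above, and the sub-0.1 s runtime reported just below confirms nothing was actually booted. Reproduced directly:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit status: $?"    # non-zero: rejected during startup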
00:06:38.049 [2024-06-07 21:23:38.272973] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:38.049 00:06:38.049 real 0m0.076s 00:06:38.049 user 0m0.049s 00:06:38.049 sys 0m0.027s 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.049 21:23:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:38.049 ************************************ 00:06:38.049 END TEST skip_rpc_with_delay 00:06:38.049 ************************************ 00:06:38.309 21:23:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:38.309 21:23:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:38.309 21:23:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:38.309 21:23:38 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:38.309 21:23:38 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.309 21:23:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.309 ************************************ 00:06:38.309 START TEST exit_on_failed_rpc_init 00:06:38.309 ************************************ 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1244301 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1244301 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 1244301 ']' 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:38.309 21:23:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:38.309 [2024-06-07 21:23:38.416083] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:06:38.309 [2024-06-07 21:23:38.416137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244301 ] 00:06:38.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.309 [2024-06-07 21:23:38.506071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.568 [2024-06-07 21:23:38.595085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.135 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:39.136 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.136 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:39.136 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:39.395 [2024-06-07 21:23:39.409515] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:06:39.395 [2024-06-07 21:23:39.409575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244566 ] 00:06:39.395 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.395 [2024-06-07 21:23:39.491607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.395 [2024-06-07 21:23:39.580366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.395 [2024-06-07 21:23:39.580443] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:39.395 [2024-06-07 21:23:39.580458] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:39.395 [2024-06-07 21:23:39.580467] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1244301 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 1244301 ']' 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 1244301 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1244301 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1244301' 00:06:39.654 killing process with pid 1244301 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 1244301 00:06:39.654 21:23:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 1244301 00:06:39.914 00:06:39.914 real 0m1.695s 00:06:39.914 user 0m2.043s 00:06:39.914 sys 0m0.478s 00:06:39.914 21:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.914 21:23:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:39.914 ************************************ 00:06:39.914 END TEST exit_on_failed_rpc_init 00:06:39.914 ************************************ 00:06:39.914 21:23:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:39.914 00:06:39.914 real 0m14.459s 00:06:39.914 user 0m14.147s 00:06:39.914 sys 0m1.728s 00:06:39.914 21:23:40 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.914 21:23:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.914 ************************************ 00:06:39.914 END TEST skip_rpc 00:06:39.914 ************************************ 00:06:39.914 21:23:40 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:39.914 21:23:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:39.914 21:23:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.914 21:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:39.914 ************************************ 00:06:39.914 START TEST rpc_client 00:06:39.914 ************************************ 00:06:39.914 21:23:40 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:40.173 * Looking for test storage... 00:06:40.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:40.173 21:23:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:40.173 OK 00:06:40.173 21:23:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:40.173 00:06:40.173 real 0m0.106s 00:06:40.173 user 0m0.044s 00:06:40.173 sys 0m0.071s 00:06:40.173 21:23:40 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.173 21:23:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 ************************************ 00:06:40.173 END TEST rpc_client 00:06:40.173 ************************************ 00:06:40.173 21:23:40 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:40.173 21:23:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:40.173 21:23:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.173 21:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 ************************************ 00:06:40.173 START TEST json_config 00:06:40.173 ************************************ 00:06:40.173 21:23:40 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:40.173 21:23:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
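The failure exercised by exit_on_failed_rpc_init above is a socket collision: the second spdk_tgt bound the same default RPC socket (/var/tmp/spdk.sock) that the first instance already owned, so rpc.c refused to listen and the app stopped non-zero. A short sketch of the collision and the usual fix, assuming spdk_tgt is on PATH and hugepages are configured; -r picks the RPC socket path, as the json_config runs below also do:

    # First instance takes the default RPC socket /var/tmp/spdk.sock.
    spdk_tgt -m 0x1 &
    sleep 2    # crude startup wait; the harness uses waitforlisten instead
    # A second instance on the same default socket fails:
    # "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    spdk_tgt -m 0x2 || echo 'second instance exited non-zero, as expected'
    # Giving each instance its own socket avoids the collision.
    spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &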
00:06:40.173 21:23:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.173 21:23:40 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.173 21:23:40 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.173 21:23:40 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.173 21:23:40 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.173 21:23:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.173 21:23:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.174 21:23:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.174 21:23:40 json_config -- paths/export.sh@5 -- # export PATH 00:06:40.174 21:23:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@47 -- # : 0 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:40.174 21:23:40 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:40.174 INFO: JSON configuration test init 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.174 21:23:40 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:40.174 21:23:40 json_config -- json_config/common.sh@9 -- # local app=target 00:06:40.174 21:23:40 json_config -- json_config/common.sh@10 -- # shift 00:06:40.174 21:23:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:40.174 21:23:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:40.174 21:23:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:40.174 21:23:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:06:40.174 21:23:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.174 21:23:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1244758 00:06:40.174 21:23:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:40.174 Waiting for target to run... 00:06:40.174 21:23:40 json_config -- json_config/common.sh@25 -- # waitforlisten 1244758 /var/tmp/spdk_tgt.sock 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@830 -- # '[' -z 1244758 ']' 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:40.174 21:23:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:40.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:40.174 21:23:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.433 [2024-06-07 21:23:40.488006] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:06:40.433 [2024-06-07 21:23:40.488079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244758 ] 00:06:40.433 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.693 [2024-06-07 21:23:40.951087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.952 [2024-06-07 21:23:41.058440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.211 21:23:41 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:41.211 21:23:41 json_config -- common/autotest_common.sh@863 -- # return 0 00:06:41.211 21:23:41 json_config -- json_config/common.sh@26 -- # echo '' 00:06:41.211 00:06:41.211 21:23:41 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:41.211 21:23:41 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:41.211 21:23:41 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:41.211 21:23:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.211 21:23:41 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:41.211 21:23:41 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:41.211 21:23:41 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:41.211 21:23:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.211 21:23:41 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:41.211 21:23:41 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:41.211 21:23:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@276 -- # 
tgt_check_notification_types 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:44.500 21:23:44 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:44.500 21:23:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:44.500 21:23:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:44.500 21:23:44 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:44.500 21:23:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:44.500 21:23:44 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:44.500 21:23:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:44.500 21:23:44 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:44.500 21:23:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:44.759 MallocForNvmf0 00:06:44.759 21:23:45 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:44.759 21:23:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:45.018 MallocForNvmf1 00:06:45.018 21:23:45 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 
8192 -c 0 00:06:45.018 21:23:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:45.276 [2024-06-07 21:23:45.478092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.276 21:23:45 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.276 21:23:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.534 21:23:45 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:45.534 21:23:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:45.793 21:23:45 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:45.793 21:23:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:46.052 21:23:46 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:46.052 21:23:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:46.311 [2024-06-07 21:23:46.429199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:46.311 21:23:46 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:46.311 21:23:46 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:46.311 21:23:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.311 21:23:46 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:46.311 21:23:46 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:46.311 21:23:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.311 21:23:46 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:46.311 21:23:46 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:46.311 21:23:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:46.570 MallocBdevForConfigChangeCheck 00:06:46.570 21:23:46 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:46.570 21:23:46 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:46.570 21:23:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.570 21:23:46 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:46.570 21:23:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
save_config 00:06:47.137 21:23:47 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:47.137 INFO: shutting down applications... 00:06:47.137 21:23:47 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:47.137 21:23:47 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:47.137 21:23:47 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:47.137 21:23:47 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:49.039 Calling clear_iscsi_subsystem 00:06:49.039 Calling clear_nvmf_subsystem 00:06:49.039 Calling clear_nbd_subsystem 00:06:49.039 Calling clear_ublk_subsystem 00:06:49.039 Calling clear_vhost_blk_subsystem 00:06:49.039 Calling clear_vhost_scsi_subsystem 00:06:49.039 Calling clear_bdev_subsystem 00:06:49.039 21:23:48 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:49.039 21:23:48 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:49.039 21:23:48 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:49.039 21:23:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:49.039 21:23:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:49.039 21:23:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:49.039 21:23:49 json_config -- json_config/json_config.sh@345 -- # break 00:06:49.039 21:23:49 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:49.039 21:23:49 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:49.039 21:23:49 json_config -- json_config/common.sh@31 -- # local app=target 00:06:49.039 21:23:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:49.039 21:23:49 json_config -- json_config/common.sh@35 -- # [[ -n 1244758 ]] 00:06:49.040 21:23:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1244758 00:06:49.040 21:23:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:49.040 21:23:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.040 21:23:49 json_config -- json_config/common.sh@41 -- # kill -0 1244758 00:06:49.040 21:23:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:49.607 21:23:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:49.607 21:23:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.607 21:23:49 json_config -- json_config/common.sh@41 -- # kill -0 1244758 00:06:49.607 21:23:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:49.607 21:23:49 json_config -- json_config/common.sh@43 -- # break 00:06:49.607 21:23:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:49.607 21:23:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:49.607 SPDK target shutdown done 00:06:49.607 21:23:49 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:49.608 INFO: relaunching applications... 
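The spdk_tgt_config.json replayed here was produced by save_config after the RPC sequence traced above; collected in one place, those calls were (verbatim from this run, with the workspace prefix shortened and scripts/rpc.py assumed to run from the SPDK repo root):

    RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > spdk_tgt_config.json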
00:06:49.608 21:23:49 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:49.608 21:23:49 json_config -- json_config/common.sh@9 -- # local app=target 00:06:49.608 21:23:49 json_config -- json_config/common.sh@10 -- # shift 00:06:49.608 21:23:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:49.608 21:23:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:49.608 21:23:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:49.608 21:23:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:49.608 21:23:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:49.608 21:23:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1246649 00:06:49.608 21:23:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:49.608 Waiting for target to run... 00:06:49.608 21:23:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:49.608 21:23:49 json_config -- json_config/common.sh@25 -- # waitforlisten 1246649 /var/tmp/spdk_tgt.sock 00:06:49.608 21:23:49 json_config -- common/autotest_common.sh@830 -- # '[' -z 1246649 ']' 00:06:49.608 21:23:49 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:49.608 21:23:49 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:49.608 21:23:49 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:49.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:49.608 21:23:49 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:49.608 21:23:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.608 [2024-06-07 21:23:49.827578] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:06:49.608 [2024-06-07 21:23:49.827651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246649 ] 00:06:49.608 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.174 [2024-06-07 21:23:50.287399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.174 [2024-06-07 21:23:50.386074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.538 [2024-06-07 21:23:53.435329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.538 [2024-06-07 21:23:53.467673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:53.538 21:23:53 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:53.538 21:23:53 json_config -- common/autotest_common.sh@863 -- # return 0 00:06:53.538 21:23:53 json_config -- json_config/common.sh@26 -- # echo '' 00:06:53.538 00:06:53.538 21:23:53 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:53.538 21:23:53 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
00:06:53.538 INFO: Checking if target configuration is the same... 00:06:53.538 21:23:53 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.538 21:23:53 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:53.538 21:23:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:53.538 + '[' 2 -ne 2 ']' 00:06:53.538 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:53.538 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:53.538 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:53.538 +++ basename /dev/fd/62 00:06:53.538 ++ mktemp /tmp/62.XXX 00:06:53.538 + tmp_file_1=/tmp/62.7KF 00:06:53.538 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.538 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:53.538 + tmp_file_2=/tmp/spdk_tgt_config.json.J6G 00:06:53.538 + ret=0 00:06:53.538 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:53.797 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:53.797 + diff -u /tmp/62.7KF /tmp/spdk_tgt_config.json.J6G 00:06:53.797 + echo 'INFO: JSON config files are the same' 00:06:53.797 INFO: JSON config files are the same 00:06:53.797 + rm /tmp/62.7KF /tmp/spdk_tgt_config.json.J6G 00:06:53.797 + exit 0 00:06:53.797 21:23:53 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:53.797 21:23:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:53.797 INFO: changing configuration and checking if this can be detected... 00:06:53.797 21:23:53 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:53.797 21:23:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:54.056 21:23:54 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:54.056 21:23:54 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:54.056 21:23:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:54.056 + '[' 2 -ne 2 ']' 00:06:54.056 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:54.056 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:54.056 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:54.056 +++ basename /dev/fd/62 00:06:54.056 ++ mktemp /tmp/62.XXX 00:06:54.056 + tmp_file_1=/tmp/62.nB1 00:06:54.056 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:54.056 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:54.056 + tmp_file_2=/tmp/spdk_tgt_config.json.rJa 00:06:54.056 + ret=0 00:06:54.056 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:54.315 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:54.574 + diff -u /tmp/62.nB1 /tmp/spdk_tgt_config.json.rJa 00:06:54.574 + ret=1 00:06:54.574 + echo '=== Start of file: /tmp/62.nB1 ===' 00:06:54.574 + cat /tmp/62.nB1 00:06:54.574 + echo '=== End of file: /tmp/62.nB1 ===' 00:06:54.574 + echo '' 00:06:54.574 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rJa ===' 00:06:54.574 + cat /tmp/spdk_tgt_config.json.rJa 00:06:54.574 + echo '=== End of file: /tmp/spdk_tgt_config.json.rJa ===' 00:06:54.574 + echo '' 00:06:54.574 + rm /tmp/62.nB1 /tmp/spdk_tgt_config.json.rJa 00:06:54.574 + exit 1 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:54.574 INFO: configuration change detected. 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@317 -- # [[ -n 1246649 ]] 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.574 21:23:54 json_config -- json_config/json_config.sh@323 -- # killprocess 1246649 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@949 -- # '[' -z 1246649 ']' 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@953 -- # kill -0 1246649 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@954 -- # uname 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:54.574 21:23:54 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1246649 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1246649' 00:06:54.574 killing process with pid 1246649 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@968 -- # kill 1246649 00:06:54.574 21:23:54 json_config -- common/autotest_common.sh@973 -- # wait 1246649 00:06:56.479 21:23:56 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:56.479 21:23:56 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:56.479 21:23:56 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:56.479 21:23:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 21:23:56 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:56.479 21:23:56 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:56.479 INFO: Success 00:06:56.479 00:06:56.479 real 0m16.008s 00:06:56.479 user 0m17.627s 00:06:56.479 sys 0m2.172s 00:06:56.479 21:23:56 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.479 21:23:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 ************************************ 00:06:56.479 END TEST json_config 00:06:56.479 ************************************ 00:06:56.479 21:23:56 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:56.479 21:23:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:56.479 21:23:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.479 21:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 ************************************ 00:06:56.479 START TEST json_config_extra_key 00:06:56.479 ************************************ 00:06:56.479 21:23:56 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.479 21:23:56 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.479 21:23:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.479 21:23:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.479 21:23:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.479 21:23:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.479 21:23:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.479 21:23:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.479 21:23:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:56.479 21:23:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.479 21:23:56 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:56.479 21:23:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:56.479 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:56.480 INFO: launching applications... 00:06:56.480 21:23:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1248061 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:56.480 Waiting for target to run... 
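For the configuration comparison earlier in the run, json_diff.sh normalizes both JSON documents before diffing so that ordering cannot cause false mismatches: each side goes through config_filter.py -method sort, then plain diff -u decides the verdict (ret=0 same, ret=1 changed). A condensed sketch of that check, assuming two snapshots a.json and b.json and using a python3 one-liner in place of config_filter.py:

    normalize() {
        # Sort object keys so semantically equal configs compare byte-equal.
        python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))' < "$1"
    }
    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    normalize a.json > "$tmp_file_1"
    normalize b.json > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm "$tmp_file_1" "$tmp_file_2"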
00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1248061 /var/tmp/spdk_tgt.sock 00:06:56.480 21:23:56 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 1248061 ']' 00:06:56.480 21:23:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:56.480 21:23:56 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:56.480 21:23:56 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:56.480 21:23:56 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:56.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:56.480 21:23:56 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:56.480 21:23:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 [2024-06-07 21:23:56.556601] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:06:56.480 [2024-06-07 21:23:56.556665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248061 ] 00:06:56.480 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.048 [2024-06-07 21:23:57.008733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.048 [2024-06-07 21:23:57.116327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.306 21:23:57 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:57.306 21:23:57 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:06:57.306 21:23:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:57.306 00:06:57.306 21:23:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:57.306 INFO: shutting down applications... 
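The shutdown that follows uses the same loop as the earlier 'SPDK target shutdown done' sequence: send SIGINT, then poll the pid with kill -0 for up to 30 half-second intervals before giving up. A minimal sketch of that wait loop, assuming $pid holds the target's process id:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'   # matches the message logged below
            break
        fi
        sleep 0.5    # same poll interval as json_config/common.sh
    done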
00:06:57.306 21:23:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:57.306 21:23:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:57.306 21:23:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:57.306 21:23:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1248061 ]] 00:06:57.307 21:23:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1248061 00:06:57.307 21:23:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:57.307 21:23:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:57.307 21:23:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1248061 00:06:57.307 21:23:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:57.875 21:23:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:57.875 21:23:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:57.875 21:23:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1248061 00:06:57.875 21:23:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:57.875 21:23:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:57.875 21:23:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:57.875 21:23:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:57.875 SPDK target shutdown done 00:06:57.875 21:23:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:57.875 Success 00:06:57.875 00:06:57.875 real 0m1.598s 00:06:57.875 user 0m1.372s 00:06:57.875 sys 0m0.562s 00:06:57.875 21:23:57 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:57.875 21:23:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:57.875 ************************************ 00:06:57.875 END TEST json_config_extra_key 00:06:57.875 ************************************ 00:06:57.875 21:23:58 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:57.875 21:23:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:57.875 21:23:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:57.875 21:23:58 -- common/autotest_common.sh@10 -- # set +x 00:06:57.875 ************************************ 00:06:57.875 START TEST alias_rpc 00:06:57.875 ************************************ 00:06:57.875 21:23:58 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:58.134 * Looking for test storage... 
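The shutdown sequence traced above (json_config/common.sh lines 38-45) sends SIGINT to the target and then polls it with kill -0 for up to 30 half-second intervals before declaring success. Reconstructed as a standalone loop:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do                # up to 30 checks, 0.5 s apart
        kill -0 "$app_pid" 2>/dev/null || break   # loop ends once the PID is gone
        sleep 0.5
    done
    echo 'SPDK target shutdown done'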
00:06:58.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:58.134 21:23:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.134 21:23:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1248390 00:06:58.135 21:23:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:58.135 21:23:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1248390 00:06:58.135 21:23:58 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 1248390 ']' 00:06:58.135 21:23:58 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.135 21:23:58 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:58.135 21:23:58 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.135 21:23:58 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:58.135 21:23:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.135 [2024-06-07 21:23:58.213717] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:06:58.135 [2024-06-07 21:23:58.213776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248390 ] 00:06:58.135 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.135 [2024-06-07 21:23:58.302343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.135 [2024-06-07 21:23:58.394575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.071 21:23:59 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:59.071 21:23:59 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:59.071 21:23:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:59.330 21:23:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1248390 00:06:59.330 21:23:59 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 1248390 ']' 00:06:59.330 21:23:59 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 1248390 00:06:59.330 21:23:59 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:06:59.330 21:23:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:59.330 21:23:59 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1248390 00:06:59.330 21:23:59 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:59.330 21:23:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:59.331 21:23:59 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1248390' 00:06:59.331 killing process with pid 1248390 00:06:59.331 21:23:59 alias_rpc -- common/autotest_common.sh@968 -- # kill 1248390 00:06:59.331 21:23:59 alias_rpc -- common/autotest_common.sh@973 -- # wait 1248390 00:06:59.590 00:06:59.590 real 0m1.741s 00:06:59.590 user 0m2.041s 00:06:59.590 sys 0m0.463s 00:06:59.590 21:23:59 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:59.590 21:23:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.590 
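The alias_rpc run above boots a bare spdk_tgt and replays a configuration through scripts/rpc.py load_config -i, exercising the RPC method-alias table. A minimal sketch: the save_config step is an assumption added for illustration (the trace only shows the load), and -i is taken to be the short form of load_config's include-aliases option:

    "$SPDK/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    # ... wait for /var/tmp/spdk.sock as in the earlier sketch ...
    "$SPDK/scripts/rpc.py" save_config > /tmp/alias.json    # assumed setup step
    "$SPDK/scripts/rpc.py" load_config -i < /tmp/alias.json # replay, resolving aliases
    kill "$spdk_tgt_pid"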
************************************ 00:06:59.590 END TEST alias_rpc 00:06:59.590 ************************************ 00:06:59.590 21:23:59 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:59.590 21:23:59 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:59.590 21:23:59 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:59.590 21:23:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.590 21:23:59 -- common/autotest_common.sh@10 -- # set +x 00:06:59.878 ************************************ 00:06:59.878 START TEST spdkcli_tcp 00:06:59.878 ************************************ 00:06:59.878 21:23:59 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:59.878 * Looking for test storage... 00:06:59.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:59.878 21:23:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:59.878 21:23:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:59.879 21:23:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:59.879 21:23:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:59.879 21:23:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:59.879 21:23:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:59.879 21:23:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:59.879 21:23:59 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:59.879 21:23:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.879 21:23:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1248712 00:06:59.879 21:23:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1248712 00:06:59.879 21:23:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:59.879 21:23:59 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 1248712 ']' 00:06:59.879 21:23:59 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.879 21:23:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:59.879 21:23:59 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.879 21:23:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:59.879 21:23:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.879 [2024-06-07 21:24:00.035354] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:06:59.879 [2024-06-07 21:24:00.035423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248712 ] 00:06:59.879 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.879 [2024-06-07 21:24:00.129209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.138 [2024-06-07 21:24:00.220738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.138 [2024-06-07 21:24:00.220743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.706 21:24:00 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:00.706 21:24:00 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:07:00.706 21:24:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:00.706 21:24:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1249028 00:07:00.706 21:24:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:00.965 [ 00:07:00.965 "bdev_malloc_delete", 00:07:00.965 "bdev_malloc_create", 00:07:00.965 "bdev_null_resize", 00:07:00.965 "bdev_null_delete", 00:07:00.965 "bdev_null_create", 00:07:00.965 "bdev_nvme_cuse_unregister", 00:07:00.965 "bdev_nvme_cuse_register", 00:07:00.965 "bdev_opal_new_user", 00:07:00.965 "bdev_opal_set_lock_state", 00:07:00.965 "bdev_opal_delete", 00:07:00.965 "bdev_opal_get_info", 00:07:00.965 "bdev_opal_create", 00:07:00.965 "bdev_nvme_opal_revert", 00:07:00.965 "bdev_nvme_opal_init", 00:07:00.965 "bdev_nvme_send_cmd", 00:07:00.965 "bdev_nvme_get_path_iostat", 00:07:00.965 "bdev_nvme_get_mdns_discovery_info", 00:07:00.965 "bdev_nvme_stop_mdns_discovery", 00:07:00.965 "bdev_nvme_start_mdns_discovery", 00:07:00.965 "bdev_nvme_set_multipath_policy", 00:07:00.965 "bdev_nvme_set_preferred_path", 00:07:00.965 "bdev_nvme_get_io_paths", 00:07:00.965 "bdev_nvme_remove_error_injection", 00:07:00.965 "bdev_nvme_add_error_injection", 00:07:00.965 "bdev_nvme_get_discovery_info", 00:07:00.965 "bdev_nvme_stop_discovery", 00:07:00.965 "bdev_nvme_start_discovery", 00:07:00.965 "bdev_nvme_get_controller_health_info", 00:07:00.965 "bdev_nvme_disable_controller", 00:07:00.965 "bdev_nvme_enable_controller", 00:07:00.965 "bdev_nvme_reset_controller", 00:07:00.965 "bdev_nvme_get_transport_statistics", 00:07:00.965 "bdev_nvme_apply_firmware", 00:07:00.965 "bdev_nvme_detach_controller", 00:07:00.965 "bdev_nvme_get_controllers", 00:07:00.965 "bdev_nvme_attach_controller", 00:07:00.965 "bdev_nvme_set_hotplug", 00:07:00.965 "bdev_nvme_set_options", 00:07:00.965 "bdev_passthru_delete", 00:07:00.965 "bdev_passthru_create", 00:07:00.965 "bdev_lvol_set_parent_bdev", 00:07:00.965 "bdev_lvol_set_parent", 00:07:00.965 "bdev_lvol_check_shallow_copy", 00:07:00.965 "bdev_lvol_start_shallow_copy", 00:07:00.965 "bdev_lvol_grow_lvstore", 00:07:00.965 "bdev_lvol_get_lvols", 00:07:00.965 "bdev_lvol_get_lvstores", 00:07:00.965 "bdev_lvol_delete", 00:07:00.965 "bdev_lvol_set_read_only", 00:07:00.965 "bdev_lvol_resize", 00:07:00.965 "bdev_lvol_decouple_parent", 00:07:00.965 "bdev_lvol_inflate", 00:07:00.965 "bdev_lvol_rename", 00:07:00.965 "bdev_lvol_clone_bdev", 00:07:00.965 "bdev_lvol_clone", 00:07:00.965 "bdev_lvol_snapshot", 00:07:00.965 "bdev_lvol_create", 00:07:00.965 "bdev_lvol_delete_lvstore", 00:07:00.965 
"bdev_lvol_rename_lvstore", 00:07:00.965 "bdev_lvol_create_lvstore", 00:07:00.965 "bdev_raid_set_options", 00:07:00.965 "bdev_raid_remove_base_bdev", 00:07:00.965 "bdev_raid_add_base_bdev", 00:07:00.965 "bdev_raid_delete", 00:07:00.965 "bdev_raid_create", 00:07:00.965 "bdev_raid_get_bdevs", 00:07:00.965 "bdev_error_inject_error", 00:07:00.965 "bdev_error_delete", 00:07:00.965 "bdev_error_create", 00:07:00.965 "bdev_split_delete", 00:07:00.965 "bdev_split_create", 00:07:00.965 "bdev_delay_delete", 00:07:00.965 "bdev_delay_create", 00:07:00.965 "bdev_delay_update_latency", 00:07:00.965 "bdev_zone_block_delete", 00:07:00.965 "bdev_zone_block_create", 00:07:00.965 "blobfs_create", 00:07:00.965 "blobfs_detect", 00:07:00.965 "blobfs_set_cache_size", 00:07:00.965 "bdev_aio_delete", 00:07:00.965 "bdev_aio_rescan", 00:07:00.965 "bdev_aio_create", 00:07:00.965 "bdev_ftl_set_property", 00:07:00.965 "bdev_ftl_get_properties", 00:07:00.965 "bdev_ftl_get_stats", 00:07:00.965 "bdev_ftl_unmap", 00:07:00.965 "bdev_ftl_unload", 00:07:00.965 "bdev_ftl_delete", 00:07:00.965 "bdev_ftl_load", 00:07:00.965 "bdev_ftl_create", 00:07:00.965 "bdev_virtio_attach_controller", 00:07:00.965 "bdev_virtio_scsi_get_devices", 00:07:00.965 "bdev_virtio_detach_controller", 00:07:00.965 "bdev_virtio_blk_set_hotplug", 00:07:00.965 "bdev_iscsi_delete", 00:07:00.965 "bdev_iscsi_create", 00:07:00.965 "bdev_iscsi_set_options", 00:07:00.965 "accel_error_inject_error", 00:07:00.965 "ioat_scan_accel_module", 00:07:00.965 "dsa_scan_accel_module", 00:07:00.965 "iaa_scan_accel_module", 00:07:00.965 "vfu_virtio_create_scsi_endpoint", 00:07:00.965 "vfu_virtio_scsi_remove_target", 00:07:00.965 "vfu_virtio_scsi_add_target", 00:07:00.965 "vfu_virtio_create_blk_endpoint", 00:07:00.965 "vfu_virtio_delete_endpoint", 00:07:00.965 "keyring_file_remove_key", 00:07:00.965 "keyring_file_add_key", 00:07:00.965 "keyring_linux_set_options", 00:07:00.965 "iscsi_get_histogram", 00:07:00.965 "iscsi_enable_histogram", 00:07:00.965 "iscsi_set_options", 00:07:00.965 "iscsi_get_auth_groups", 00:07:00.965 "iscsi_auth_group_remove_secret", 00:07:00.965 "iscsi_auth_group_add_secret", 00:07:00.965 "iscsi_delete_auth_group", 00:07:00.965 "iscsi_create_auth_group", 00:07:00.965 "iscsi_set_discovery_auth", 00:07:00.965 "iscsi_get_options", 00:07:00.965 "iscsi_target_node_request_logout", 00:07:00.965 "iscsi_target_node_set_redirect", 00:07:00.965 "iscsi_target_node_set_auth", 00:07:00.965 "iscsi_target_node_add_lun", 00:07:00.965 "iscsi_get_stats", 00:07:00.965 "iscsi_get_connections", 00:07:00.965 "iscsi_portal_group_set_auth", 00:07:00.965 "iscsi_start_portal_group", 00:07:00.965 "iscsi_delete_portal_group", 00:07:00.965 "iscsi_create_portal_group", 00:07:00.965 "iscsi_get_portal_groups", 00:07:00.965 "iscsi_delete_target_node", 00:07:00.965 "iscsi_target_node_remove_pg_ig_maps", 00:07:00.965 "iscsi_target_node_add_pg_ig_maps", 00:07:00.965 "iscsi_create_target_node", 00:07:00.965 "iscsi_get_target_nodes", 00:07:00.965 "iscsi_delete_initiator_group", 00:07:00.965 "iscsi_initiator_group_remove_initiators", 00:07:00.965 "iscsi_initiator_group_add_initiators", 00:07:00.965 "iscsi_create_initiator_group", 00:07:00.965 "iscsi_get_initiator_groups", 00:07:00.965 "nvmf_set_crdt", 00:07:00.965 "nvmf_set_config", 00:07:00.965 "nvmf_set_max_subsystems", 00:07:00.965 "nvmf_stop_mdns_prr", 00:07:00.965 "nvmf_publish_mdns_prr", 00:07:00.965 "nvmf_subsystem_get_listeners", 00:07:00.965 "nvmf_subsystem_get_qpairs", 00:07:00.965 "nvmf_subsystem_get_controllers", 00:07:00.965 
"nvmf_get_stats", 00:07:00.965 "nvmf_get_transports", 00:07:00.965 "nvmf_create_transport", 00:07:00.965 "nvmf_get_targets", 00:07:00.965 "nvmf_delete_target", 00:07:00.965 "nvmf_create_target", 00:07:00.965 "nvmf_subsystem_allow_any_host", 00:07:00.965 "nvmf_subsystem_remove_host", 00:07:00.965 "nvmf_subsystem_add_host", 00:07:00.965 "nvmf_ns_remove_host", 00:07:00.965 "nvmf_ns_add_host", 00:07:00.965 "nvmf_subsystem_remove_ns", 00:07:00.965 "nvmf_subsystem_add_ns", 00:07:00.965 "nvmf_subsystem_listener_set_ana_state", 00:07:00.965 "nvmf_discovery_get_referrals", 00:07:00.965 "nvmf_discovery_remove_referral", 00:07:00.965 "nvmf_discovery_add_referral", 00:07:00.965 "nvmf_subsystem_remove_listener", 00:07:00.965 "nvmf_subsystem_add_listener", 00:07:00.965 "nvmf_delete_subsystem", 00:07:00.965 "nvmf_create_subsystem", 00:07:00.965 "nvmf_get_subsystems", 00:07:00.965 "env_dpdk_get_mem_stats", 00:07:00.965 "nbd_get_disks", 00:07:00.965 "nbd_stop_disk", 00:07:00.965 "nbd_start_disk", 00:07:00.965 "ublk_recover_disk", 00:07:00.966 "ublk_get_disks", 00:07:00.966 "ublk_stop_disk", 00:07:00.966 "ublk_start_disk", 00:07:00.966 "ublk_destroy_target", 00:07:00.966 "ublk_create_target", 00:07:00.966 "virtio_blk_create_transport", 00:07:00.966 "virtio_blk_get_transports", 00:07:00.966 "vhost_controller_set_coalescing", 00:07:00.966 "vhost_get_controllers", 00:07:00.966 "vhost_delete_controller", 00:07:00.966 "vhost_create_blk_controller", 00:07:00.966 "vhost_scsi_controller_remove_target", 00:07:00.966 "vhost_scsi_controller_add_target", 00:07:00.966 "vhost_start_scsi_controller", 00:07:00.966 "vhost_create_scsi_controller", 00:07:00.966 "thread_set_cpumask", 00:07:00.966 "framework_get_scheduler", 00:07:00.966 "framework_set_scheduler", 00:07:00.966 "framework_get_reactors", 00:07:00.966 "thread_get_io_channels", 00:07:00.966 "thread_get_pollers", 00:07:00.966 "thread_get_stats", 00:07:00.966 "framework_monitor_context_switch", 00:07:00.966 "spdk_kill_instance", 00:07:00.966 "log_enable_timestamps", 00:07:00.966 "log_get_flags", 00:07:00.966 "log_clear_flag", 00:07:00.966 "log_set_flag", 00:07:00.966 "log_get_level", 00:07:00.966 "log_set_level", 00:07:00.966 "log_get_print_level", 00:07:00.966 "log_set_print_level", 00:07:00.966 "framework_enable_cpumask_locks", 00:07:00.966 "framework_disable_cpumask_locks", 00:07:00.966 "framework_wait_init", 00:07:00.966 "framework_start_init", 00:07:00.966 "scsi_get_devices", 00:07:00.966 "bdev_get_histogram", 00:07:00.966 "bdev_enable_histogram", 00:07:00.966 "bdev_set_qos_limit", 00:07:00.966 "bdev_set_qd_sampling_period", 00:07:00.966 "bdev_get_bdevs", 00:07:00.966 "bdev_reset_iostat", 00:07:00.966 "bdev_get_iostat", 00:07:00.966 "bdev_examine", 00:07:00.966 "bdev_wait_for_examine", 00:07:00.966 "bdev_set_options", 00:07:00.966 "notify_get_notifications", 00:07:00.966 "notify_get_types", 00:07:00.966 "accel_get_stats", 00:07:00.966 "accel_set_options", 00:07:00.966 "accel_set_driver", 00:07:00.966 "accel_crypto_key_destroy", 00:07:00.966 "accel_crypto_keys_get", 00:07:00.966 "accel_crypto_key_create", 00:07:00.966 "accel_assign_opc", 00:07:00.966 "accel_get_module_info", 00:07:00.966 "accel_get_opc_assignments", 00:07:00.966 "vmd_rescan", 00:07:00.966 "vmd_remove_device", 00:07:00.966 "vmd_enable", 00:07:00.966 "sock_get_default_impl", 00:07:00.966 "sock_set_default_impl", 00:07:00.966 "sock_impl_set_options", 00:07:00.966 "sock_impl_get_options", 00:07:00.966 "iobuf_get_stats", 00:07:00.966 "iobuf_set_options", 00:07:00.966 "keyring_get_keys", 00:07:00.966 
"framework_get_pci_devices", 00:07:00.966 "framework_get_config", 00:07:00.966 "framework_get_subsystems", 00:07:00.966 "vfu_tgt_set_base_path", 00:07:00.966 "trace_get_info", 00:07:00.966 "trace_get_tpoint_group_mask", 00:07:00.966 "trace_disable_tpoint_group", 00:07:00.966 "trace_enable_tpoint_group", 00:07:00.966 "trace_clear_tpoint_mask", 00:07:00.966 "trace_set_tpoint_mask", 00:07:00.966 "spdk_get_version", 00:07:00.966 "rpc_get_methods" 00:07:00.966 ] 00:07:00.966 21:24:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:00.966 21:24:01 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:00.966 21:24:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.225 21:24:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:01.225 21:24:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1248712 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 1248712 ']' 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 1248712 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1248712 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1248712' 00:07:01.225 killing process with pid 1248712 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 1248712 00:07:01.225 21:24:01 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 1248712 00:07:01.484 00:07:01.484 real 0m1.761s 00:07:01.484 user 0m3.395s 00:07:01.484 sys 0m0.495s 00:07:01.484 21:24:01 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.484 21:24:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.484 ************************************ 00:07:01.484 END TEST spdkcli_tcp 00:07:01.484 ************************************ 00:07:01.484 21:24:01 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.484 21:24:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:01.484 21:24:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:01.484 21:24:01 -- common/autotest_common.sh@10 -- # set +x 00:07:01.484 ************************************ 00:07:01.484 START TEST dpdk_mem_utility 00:07:01.484 ************************************ 00:07:01.484 21:24:01 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.743 * Looking for test storage... 
00:07:01.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:01.743 21:24:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:01.743 21:24:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.743 21:24:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1249150 00:07:01.743 21:24:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1249150 00:07:01.743 21:24:01 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 1249150 ']' 00:07:01.743 21:24:01 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.743 21:24:01 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:01.743 21:24:01 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.743 21:24:01 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:01.743 21:24:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:01.743 [2024-06-07 21:24:01.853071] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:01.743 [2024-06-07 21:24:01.853128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249150 ] 00:07:01.743 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.743 [2024-06-07 21:24:01.944705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.003 [2024-06-07 21:24:02.036972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.570 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:02.570 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:07:02.570 21:24:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:02.570 21:24:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:02.570 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.570 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:02.570 { 00:07:02.570 "filename": "/tmp/spdk_mem_dump.txt" 00:07:02.570 } 00:07:02.570 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.570 21:24:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:02.830 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:02.830 1 heaps totaling size 814.000000 MiB 00:07:02.830 size: 814.000000 MiB heap id: 0 00:07:02.830 end heaps---------- 00:07:02.830 8 mempools totaling size 598.116089 MiB 00:07:02.830 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:02.830 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:02.830 size: 84.521057 MiB name: bdev_io_1249150 00:07:02.830 size: 51.011292 MiB name: evtpool_1249150 00:07:02.830 
size: 50.003479 MiB name: msgpool_1249150 00:07:02.830 size: 21.763794 MiB name: PDU_Pool 00:07:02.830 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:02.830 size: 0.026123 MiB name: Session_Pool 00:07:02.830 end mempools------- 00:07:02.830 6 memzones totaling size 4.142822 MiB 00:07:02.830 size: 1.000366 MiB name: RG_ring_0_1249150 00:07:02.830 size: 1.000366 MiB name: RG_ring_1_1249150 00:07:02.830 size: 1.000366 MiB name: RG_ring_4_1249150 00:07:02.830 size: 1.000366 MiB name: RG_ring_5_1249150 00:07:02.830 size: 0.125366 MiB name: RG_ring_2_1249150 00:07:02.830 size: 0.015991 MiB name: RG_ring_3_1249150 00:07:02.830 end memzones------- 00:07:02.830 21:24:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:02.830 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:02.830 list of free elements. size: 12.519348 MiB 00:07:02.830 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:02.830 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:02.830 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:02.830 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:02.830 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:02.830 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:02.830 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:02.830 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:02.830 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:02.830 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:02.830 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:02.830 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:02.830 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:02.830 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:02.830 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:02.830 list of standard malloc elements. 
size: 199.218079 MiB 00:07:02.830 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:02.831 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:02.831 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:02.831 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:02.831 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:02.831 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:02.831 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:02.831 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:02.831 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:02.831 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:02.831 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:02.831 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:02.831 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:02.831 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:02.831 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:02.831 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:02.831 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:02.831 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:02.831 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:02.831 list of memzone associated elements. 
size: 602.262573 MiB 00:07:02.831 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:02.831 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:02.831 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:02.831 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:02.831 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:02.831 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1249150_0 00:07:02.831 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:02.831 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1249150_0 00:07:02.831 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:02.831 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1249150_0 00:07:02.831 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:02.831 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:02.831 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:02.831 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:02.831 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:02.831 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1249150 00:07:02.831 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:02.831 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1249150 00:07:02.831 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:02.831 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1249150 00:07:02.831 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:02.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:02.831 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:02.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:02.831 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:02.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:02.831 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:02.831 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:02.831 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:02.831 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1249150 00:07:02.831 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:02.831 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1249150 00:07:02.831 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:02.831 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1249150 00:07:02.831 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:02.831 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1249150 00:07:02.831 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:02.831 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1249150 00:07:02.831 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:02.831 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:02.831 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:02.831 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:02.831 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:02.831 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:02.831 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:02.831 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1249150 00:07:02.831 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:02.831 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:02.831 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:02.831 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:02.831 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:02.831 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1249150 00:07:02.831 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:02.831 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:02.831 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:02.831 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1249150 00:07:02.831 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:02.831 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1249150 00:07:02.831 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:02.831 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:02.831 21:24:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:02.831 21:24:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1249150 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 1249150 ']' 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 1249150 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1249150 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1249150' 00:07:02.831 killing process with pid 1249150 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 1249150 00:07:02.831 21:24:02 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 1249150 00:07:03.090 00:07:03.090 real 0m1.604s 00:07:03.090 user 0m1.783s 00:07:03.090 sys 0m0.456s 00:07:03.090 21:24:03 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:03.090 21:24:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:03.090 ************************************ 00:07:03.090 END TEST dpdk_mem_utility 00:07:03.090 ************************************ 00:07:03.090 21:24:03 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:03.090 21:24:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:03.090 21:24:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:03.090 21:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 ************************************ 00:07:03.349 START TEST event 00:07:03.349 ************************************ 00:07:03.349 21:24:03 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:03.349 * Looking for test storage... 
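The dpdk_mem_utility test above asks a running target for a memory snapshot via the env_dpdk_get_mem_stats RPC (which reports its dump file as /tmp/spdk_mem_dump.txt) and renders it twice with scripts/dpdk_mem_info.py: once as the heap/mempool/memzone summary and once, with -m 0, as the detailed element map filling the preceding lines. As standalone commands, assuming the same running spdk_tgt:

    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    "$SPDK/scripts/dpdk_mem_info.py"                # summary: heaps, mempools, memzones
    "$SPDK/scripts/dpdk_mem_info.py" -m 0           # detailed element map (heap id 0 above)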
00:07:03.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:03.349 21:24:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:03.349 21:24:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.349 21:24:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:03.349 21:24:03 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:03.349 21:24:03 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:03.349 21:24:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 ************************************ 00:07:03.349 START TEST event_perf 00:07:03.349 ************************************ 00:07:03.349 21:24:03 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:03.349 Running I/O for 1 seconds...[2024-06-07 21:24:03.530446] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:03.349 [2024-06-07 21:24:03.530520] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249603 ] 00:07:03.349 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.608 [2024-06-07 21:24:03.621835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.608 [2024-06-07 21:24:03.717366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.608 [2024-06-07 21:24:03.717424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.608 [2024-06-07 21:24:03.717424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.608 [2024-06-07 21:24:03.717382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.546 Running I/O for 1 seconds... 00:07:04.546 lcore 0: 171018 00:07:04.546 lcore 1: 171016 00:07:04.546 lcore 2: 171015 00:07:04.546 lcore 3: 171018 00:07:04.546 done. 00:07:04.546 00:07:04.546 real 0m1.288s 00:07:04.546 user 0m4.178s 00:07:04.546 sys 0m0.105s 00:07:04.546 21:24:04 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:04.546 21:24:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.546 ************************************ 00:07:04.546 END TEST event_perf 00:07:04.546 ************************************ 00:07:04.805 21:24:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:04.805 21:24:04 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:04.805 21:24:04 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:04.805 21:24:04 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.805 ************************************ 00:07:04.805 START TEST event_reactor 00:07:04.805 ************************************ 00:07:04.805 21:24:04 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:04.805 [2024-06-07 21:24:04.889256] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:07:04.805 [2024-06-07 21:24:04.889323] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249830 ] 00:07:04.805 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.805 [2024-06-07 21:24:04.978434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.805 [2024-06-07 21:24:05.068544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.182 test_start 00:07:06.182 oneshot 00:07:06.182 tick 100 00:07:06.182 tick 100 00:07:06.182 tick 250 00:07:06.182 tick 100 00:07:06.182 tick 100 00:07:06.182 tick 250 00:07:06.182 tick 100 00:07:06.182 tick 500 00:07:06.182 tick 100 00:07:06.182 tick 100 00:07:06.182 tick 250 00:07:06.182 tick 100 00:07:06.182 tick 100 00:07:06.182 test_end 00:07:06.182 00:07:06.182 real 0m1.276s 00:07:06.182 user 0m1.167s 00:07:06.182 sys 0m0.105s 00:07:06.182 21:24:06 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:06.182 21:24:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:06.182 ************************************ 00:07:06.182 END TEST event_reactor 00:07:06.182 ************************************ 00:07:06.182 21:24:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:06.182 21:24:06 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:06.182 21:24:06 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:06.182 21:24:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.182 ************************************ 00:07:06.182 START TEST event_reactor_perf 00:07:06.182 ************************************ 00:07:06.182 21:24:06 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:06.182 [2024-06-07 21:24:06.234102] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:07:06.182 [2024-06-07 21:24:06.234170] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250069 ] 00:07:06.182 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.182 [2024-06-07 21:24:06.326174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.182 [2024-06-07 21:24:06.416231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.560 test_start 00:07:07.560 test_end 00:07:07.560 Performance: 313556 events per second 00:07:07.560 00:07:07.560 real 0m1.280s 00:07:07.560 user 0m1.182s 00:07:07.560 sys 0m0.092s 00:07:07.560 21:24:07 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:07.560 21:24:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 ************************************ 00:07:07.560 END TEST event_reactor_perf 00:07:07.560 ************************************ 00:07:07.560 21:24:07 event -- event/event.sh@49 -- # uname -s 00:07:07.560 21:24:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:07.560 21:24:07 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:07.560 21:24:07 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:07.560 21:24:07 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.560 21:24:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 ************************************ 00:07:07.560 START TEST event_scheduler 00:07:07.560 ************************************ 00:07:07.560 21:24:07 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:07.560 * Looking for test storage... 00:07:07.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:07.560 21:24:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:07.560 21:24:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1250376 00:07:07.560 21:24:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.560 21:24:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:07.560 21:24:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1250376 00:07:07.560 21:24:07 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 1250376 ']' 00:07:07.560 21:24:07 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.560 21:24:07 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:07.560 21:24:07 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
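Three event-framework micro-benchmarks complete above before the scheduler test begins: event_perf drives roughly 171k events per lcore across mask 0xF in one second, reactor replays a scripted oneshot/tick sequence, and reactor_perf measures 313556 events per second on a single core. Their invocations, collected from the trace:

    EV="$SPDK/test/event"
    "$EV/event_perf/event_perf" -m 0xF -t 1    # per-lcore event counts, 4 cores, 1 s
    "$EV/reactor/reactor" -t 1                 # oneshot and tick pollers on one core
    "$EV/reactor_perf/reactor_perf" -t 1       # single-reactor events/second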
00:07:07.560 21:24:07 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:07.560 21:24:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 [2024-06-07 21:24:07.703129] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:07.560 [2024-06-07 21:24:07.703197] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250376 ] 00:07:07.560 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.560 [2024-06-07 21:24:07.768170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.820 [2024-06-07 21:24:07.845097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.820 [2024-06-07 21:24:07.845122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.820 [2024-06-07 21:24:07.845235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.820 [2024-06-07 21:24:07.845236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.820 21:24:07 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:07.820 21:24:07 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:07:07.820 21:24:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:07.820 21:24:07 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.820 21:24:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.820 [2024-06-07 21:24:07.909912] dpdk_governor.c: 131:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:07.820 [2024-06-07 21:24:07.909929] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:07:07.820 [2024-06-07 21:24:07.909939] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:07.820 [2024-06-07 21:24:07.909944] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:07.820 [2024-06-07 21:24:07.909949] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:07.820 21:24:07 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.820 21:24:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:07.820 21:24:07 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.820 21:24:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.820 [2024-06-07 21:24:07.981275] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
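The startup above launches the scheduler app with --wait-for-rpc, switches it to the dynamic scheduler over RPC, and then releases initialization. The dpdk_governor *ERROR* line is the expected refusal to load when the app core mask covers only a subset of SMT siblings, after which the dynamic scheduler falls back to its defaults (load limit 20, core limit 80, core busy 95). The RPC pair, shown here with plain rpc.py standing in for the test's rpc_cmd wrapper:

    # app was started as: scheduler -m 0xF -p 0x2 --wait-for-rpc -f
    "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic
    "$SPDK/scripts/rpc.py" framework_start_init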
00:07:07.820 21:24:07 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.821 21:24:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:07.821 21:24:07 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:07.821 21:24:07 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.821 21:24:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 ************************************ 00:07:07.821 START TEST scheduler_create_thread 00:07:07.821 ************************************ 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 2 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 3 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 4 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 5 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 6 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 7 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 8 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.821 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 9 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 10 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.080 21:24:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.458 21:24:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:09.458 21:24:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:09.458 21:24:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:09.458 21:24:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:09.458 21:24:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 21:24:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:10.395 00:07:10.395 real 0m2.620s 00:07:10.395 user 0m0.021s 00:07:10.395 sys 0m0.007s 00:07:10.395 21:24:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:10.395 21:24:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.395 ************************************ 00:07:10.395 END TEST scheduler_create_thread 00:07:10.395 ************************************ 00:07:10.654 21:24:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:10.654 21:24:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1250376 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 1250376 ']' 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 1250376 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1250376 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1250376' 00:07:10.654 killing process with pid 1250376 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 1250376 00:07:10.654 21:24:10 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 1250376 00:07:10.916 [2024-06-07 21:24:11.119470] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
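The scheduler_create_thread trace above is driven entirely by the scheduler test plugin's RPCs: it creates active and idle threads pinned to cores 0-3, creates unpinned threads with partial loads, rebalances thread 11 to 50% active, and deletes thread 12 before shutdown. A minimal sketch of that RPC sequence, assuming an SPDK app is already listening on the default socket and the scheduler_plugin module is importable by rpc.py (e.g. via PYTHONPATH pointing at test/event/scheduler):

  # Active thread pinned to core 0, reporting 100% busy
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Idle thread pinned to core 1, reporting 0% busy
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
  # Unpinned thread at one-third load; the RPC returns the new thread id
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  # Rebalance an existing thread (id 11 in the run above) to 50% active
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  # Delete a thread by id (id 12 in the run above)
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12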
00:07:11.175 00:07:11.175 real 0m3.738s 00:07:11.175 user 0m5.695s 00:07:11.175 sys 0m0.364s 00:07:11.175 21:24:11 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.175 21:24:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.175 ************************************ 00:07:11.175 END TEST event_scheduler 00:07:11.175 ************************************ 00:07:11.175 21:24:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:11.175 21:24:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:11.175 21:24:11 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:11.175 21:24:11 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.175 21:24:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.175 ************************************ 00:07:11.175 START TEST app_repeat 00:07:11.175 ************************************ 00:07:11.175 21:24:11 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1251587 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1251587' 00:07:11.175 Process app_repeat pid: 1251587 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:11.175 spdk_app_start Round 0 00:07:11.175 21:24:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1251587 /var/tmp/spdk-nbd.sock 00:07:11.175 21:24:11 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1251587 ']' 00:07:11.175 21:24:11 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.175 21:24:11 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:11.175 21:24:11 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.175 21:24:11 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:11.175 21:24:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.175 [2024-06-07 21:24:11.416390] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
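app_repeat starts here with reactors on cores 0-1 (-m 0x3), a 4-second round timer (-t 4), and its RPC server on a dedicated nbd socket. A hedged sketch of the launch-and-wait pattern; the polling loop below is a simplified stand-in for the harness's waitforlisten helper, which retries until the UNIX domain socket answers RPCs:

  # Launch the repeat-test app (path abbreviated from the workspace layout above)
  test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  # Block until the app's RPC socket accepts requests
  until rpc.py -s /var/tmp/spdk-nbd.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done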
00:07:11.176 [2024-06-07 21:24:11.416442] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251587 ] 00:07:11.435 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.435 [2024-06-07 21:24:11.507498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.435 [2024-06-07 21:24:11.600639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.435 [2024-06-07 21:24:11.600644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.435 21:24:11 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:11.435 21:24:11 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:11.435 21:24:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.693 Malloc0 00:07:11.693 21:24:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.952 Malloc1 00:07:11.952 21:24:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.952 21:24:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.210 /dev/nbd0 00:07:12.210 21:24:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.469 21:24:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:12.469 21:24:12 event.app_repeat 
-- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.469 1+0 records in 00:07:12.469 1+0 records out 00:07:12.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200894 s, 20.4 MB/s 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:12.469 21:24:12 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:12.469 21:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.469 21:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.469 21:24:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.728 /dev/nbd1 00:07:12.728 21:24:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.728 21:24:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.728 1+0 records in 00:07:12.728 1+0 records out 00:07:12.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236911 s, 17.3 MB/s 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:12.728 21:24:12 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:12.728 21:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.728 21:24:12 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.728 21:24:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.728 21:24:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.728 21:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.987 { 00:07:12.987 "nbd_device": "/dev/nbd0", 00:07:12.987 "bdev_name": "Malloc0" 00:07:12.987 }, 00:07:12.987 { 00:07:12.987 "nbd_device": "/dev/nbd1", 00:07:12.987 "bdev_name": "Malloc1" 00:07:12.987 } 00:07:12.987 ]' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.987 { 00:07:12.987 "nbd_device": "/dev/nbd0", 00:07:12.987 "bdev_name": "Malloc0" 00:07:12.987 }, 00:07:12.987 { 00:07:12.987 "nbd_device": "/dev/nbd1", 00:07:12.987 "bdev_name": "Malloc1" 00:07:12.987 } 00:07:12.987 ]' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.987 /dev/nbd1' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.987 /dev/nbd1' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.987 256+0 records in 00:07:12.987 256+0 records out 00:07:12.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00983764 s, 107 MB/s 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.987 256+0 records in 00:07:12.987 256+0 records out 00:07:12.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191889 s, 54.6 MB/s 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.987 256+0 records in 00:07:12.987 256+0 records out 00:07:12.987 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0207888 s, 50.4 MB/s 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.987 21:24:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.245 21:24:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.503 21:24:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.503 21:24:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.503 21:24:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.503 21:24:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.503 21:24:13 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.503 21:24:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.504 21:24:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.504 21:24:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.504 21:24:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.504 21:24:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.504 21:24:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.790 21:24:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.790 21:24:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.048 21:24:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.307 [2024-06-07 21:24:14.405155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.307 [2024-06-07 21:24:14.488872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.307 [2024-06-07 21:24:14.488876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.307 [2024-06-07 21:24:14.532543] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.307 [2024-06-07 21:24:14.532591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.593 21:24:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.593 21:24:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:17.593 spdk_app_start Round 1 00:07:17.593 21:24:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1251587 /var/tmp/spdk-nbd.sock 00:07:17.593 21:24:17 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1251587 ']' 00:07:17.593 21:24:17 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.593 21:24:17 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:17.593 21:24:17 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
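Round 1 now repeats the data-integrity cycle Round 0 just completed: create two 64 MiB malloc bdevs with 4 KiB blocks, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with O_DIRECT, compare the readback against the source file, and tear the disks down. Condensed to one device, assuming the nbd kernel module is loaded (scratch paths are illustrative):

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096            # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256               # 1 MiB of random data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct     # write through NBD
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                                # byte-for-byte verify
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0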
00:07:17.593 21:24:17 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:17.593 21:24:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.593 21:24:17 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:17.593 21:24:17 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:17.593 21:24:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.593 Malloc0 00:07:17.593 21:24:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.852 Malloc1 00:07:17.852 21:24:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.852 21:24:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:18.111 /dev/nbd0 00:07:18.111 21:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:18.111 21:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:18.111 1+0 records in 00:07:18.111 1+0 records out 00:07:18.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226994 s, 18.0 MB/s 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:18.111 21:24:18 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:18.111 21:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.111 21:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.111 21:24:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:18.370 /dev/nbd1 00:07:18.370 21:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.370 21:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:18.370 1+0 records in 00:07:18.370 1+0 records out 00:07:18.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212879 s, 19.2 MB/s 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:18.370 21:24:18 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:18.370 21:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.370 21:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.370 21:24:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.370 21:24:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.370 21:24:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:18.629 { 00:07:18.629 "nbd_device": "/dev/nbd0", 00:07:18.629 "bdev_name": "Malloc0" 00:07:18.629 }, 00:07:18.629 { 00:07:18.629 "nbd_device": "/dev/nbd1", 00:07:18.629 "bdev_name": "Malloc1" 00:07:18.629 } 00:07:18.629 ]' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.629 { 00:07:18.629 "nbd_device": "/dev/nbd0", 00:07:18.629 "bdev_name": "Malloc0" 00:07:18.629 }, 00:07:18.629 { 00:07:18.629 "nbd_device": "/dev/nbd1", 00:07:18.629 "bdev_name": "Malloc1" 00:07:18.629 } 00:07:18.629 ]' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.629 /dev/nbd1' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.629 /dev/nbd1' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.629 256+0 records in 00:07:18.629 256+0 records out 00:07:18.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103637 s, 101 MB/s 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.629 256+0 records in 00:07:18.629 256+0 records out 00:07:18.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197993 s, 53.0 MB/s 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.629 256+0 records in 00:07:18.629 256+0 records out 00:07:18.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203655 s, 51.5 MB/s 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.629 21:24:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.891 21:24:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.150 21:24:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.409 21:24:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.409 21:24:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.668 21:24:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.927 [2024-06-07 21:24:20.092038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.927 [2024-06-07 21:24:20.178121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.927 [2024-06-07 21:24:20.178126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.187 [2024-06-07 21:24:20.223517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:20.187 [2024-06-07 21:24:20.223565] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.773 21:24:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:22.773 21:24:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:22.773 spdk_app_start Round 2 00:07:22.773 21:24:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1251587 /var/tmp/spdk-nbd.sock 00:07:22.773 21:24:22 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1251587 ']' 00:07:22.773 21:24:22 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.773 21:24:22 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:22.773 21:24:22 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:22.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
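The repeated grep/dd/stat fragments in the traces above come from the waitfornbd helper: after nbd_start_disk it polls /proc/partitions for up to 20 iterations, then proves the device services I/O by reading one block and checking that the copy is non-empty. A sketch of that idiom (the retry delay is an assumption; the trace only shows the loop counter):

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # read one 4 KiB block through the device, then check it arrived
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }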
00:07:22.773 21:24:22 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:22.773 21:24:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:23.032 21:24:23 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:23.032 21:24:23 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:23.032 21:24:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.290 Malloc0 00:07:23.290 21:24:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.549 Malloc1 00:07:23.549 21:24:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.549 21:24:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.550 21:24:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.550 21:24:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:23.550 21:24:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.550 21:24:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.550 21:24:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:23.809 /dev/nbd0 00:07:23.809 21:24:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.809 21:24:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:23.809 1+0 records in 00:07:23.809 1+0 records out 00:07:23.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235658 s, 17.4 MB/s 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:23.809 21:24:23 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:23.809 21:24:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.809 21:24:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.809 21:24:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:24.068 /dev/nbd1 00:07:24.068 21:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.068 21:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.068 1+0 records in 00:07:24.068 1+0 records out 00:07:24.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244679 s, 16.7 MB/s 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:24.068 21:24:24 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:24.068 21:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.068 21:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.068 21:24:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.068 21:24:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.068 21:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:24.328 { 00:07:24.328 "nbd_device": "/dev/nbd0", 00:07:24.328 "bdev_name": "Malloc0" 00:07:24.328 }, 00:07:24.328 { 00:07:24.328 "nbd_device": "/dev/nbd1", 00:07:24.328 "bdev_name": "Malloc1" 00:07:24.328 } 00:07:24.328 ]' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.328 { 00:07:24.328 "nbd_device": "/dev/nbd0", 00:07:24.328 "bdev_name": "Malloc0" 00:07:24.328 }, 00:07:24.328 { 00:07:24.328 "nbd_device": "/dev/nbd1", 00:07:24.328 "bdev_name": "Malloc1" 00:07:24.328 } 00:07:24.328 ]' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.328 /dev/nbd1' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.328 /dev/nbd1' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:24.328 256+0 records in 00:07:24.328 256+0 records out 00:07:24.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00976026 s, 107 MB/s 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.328 256+0 records in 00:07:24.328 256+0 records out 00:07:24.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198384 s, 52.9 MB/s 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.328 256+0 records in 00:07:24.328 256+0 records out 00:07:24.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210814 s, 49.7 MB/s 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.328 21:24:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.587 21:24:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.846 21:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.105 21:24:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.105 21:24:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:25.365 21:24:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:25.624 [2024-06-07 21:24:25.810739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.883 [2024-06-07 21:24:25.894195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.883 [2024-06-07 21:24:25.894199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.883 [2024-06-07 21:24:25.939452] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:25.883 [2024-06-07 21:24:25.939499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:28.419 21:24:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1251587 /var/tmp/spdk-nbd.sock 00:07:28.419 21:24:28 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1251587 ']' 00:07:28.419 21:24:28 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:28.419 21:24:28 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:28.419 21:24:28 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:28.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
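With the final round's socket up, the harness can tear the app down through killprocess, the same sequence already seen when event_scheduler shut down and repeated just below: confirm the PID still exists, confirm on Linux that the process is not the sudo wrapper itself, then signal and reap it. A sketch reconstructed from those traced steps:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                     # nothing to do if already gone
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [ "$name" = sudo ] && return 1             # never signal the sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap so the exit status propagates
  }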
00:07:28.419 21:24:28 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:28.419 21:24:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:28.678 21:24:28 event.app_repeat -- event/event.sh@39 -- # killprocess 1251587 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 1251587 ']' 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 1251587 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1251587 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1251587' 00:07:28.678 killing process with pid 1251587 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@968 -- # kill 1251587 00:07:28.678 21:24:28 event.app_repeat -- common/autotest_common.sh@973 -- # wait 1251587 00:07:28.937 spdk_app_start is called in Round 0. 00:07:28.937 Shutdown signal received, stop current app iteration 00:07:28.937 Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 reinitialization... 00:07:28.937 spdk_app_start is called in Round 1. 00:07:28.937 Shutdown signal received, stop current app iteration 00:07:28.937 Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 reinitialization... 00:07:28.937 spdk_app_start is called in Round 2. 00:07:28.937 Shutdown signal received, stop current app iteration 00:07:28.937 Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 reinitialization... 00:07:28.937 spdk_app_start is called in Round 3. 
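Annotation: killprocess, traced here against the app_repeat pid, guards the kill with three checks — the pid is non-empty, the process still answers kill -0, and (on Linux) its comm name is not sudo. Rendered as a standalone sketch, lightly simplified from the xtrace (the real helper lives in test/common/autotest_common.sh):

```bash
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                    # refuse if already gone
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1   # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap the child we started
}
```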
00:07:28.937 Shutdown signal received, stop current app iteration 00:07:28.937 21:24:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:28.937 21:24:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:28.937 00:07:28.937 real 0m17.707s 00:07:28.937 user 0m39.071s 00:07:28.937 sys 0m2.904s 00:07:28.937 21:24:29 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.937 21:24:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:28.937 ************************************ 00:07:28.937 END TEST app_repeat 00:07:28.937 ************************************ 00:07:28.937 21:24:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:28.937 21:24:29 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:28.937 21:24:29 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:28.937 21:24:29 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.937 21:24:29 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.937 ************************************ 00:07:28.937 START TEST cpu_locks 00:07:28.937 ************************************ 00:07:28.937 21:24:29 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:29.196 * Looking for test storage... 00:07:29.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:29.196 21:24:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:29.196 21:24:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:29.196 21:24:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:29.196 21:24:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:29.196 21:24:29 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:29.196 21:24:29 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:29.196 21:24:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.196 ************************************ 00:07:29.196 START TEST default_locks 00:07:29.196 ************************************ 00:07:29.196 21:24:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:07:29.196 21:24:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1254967 00:07:29.196 21:24:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1254967 00:07:29.196 21:24:29 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1254967 ']' 00:07:29.196 21:24:29 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.196 21:24:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:29.197 21:24:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:29.197 21:24:29 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
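Annotation: waitforlisten prints the banner seen above and then retries inside an xtrace_disable block, which is why its loop body never appears in the log — only the banner, the post-loop `(( i == 0 ))` check, and the final return. A plausible sketch of the hidden loop; the probe command (rpc_get_methods via scripts/rpc.py) and the sleep interval are assumptions, since the trace hides them:

```bash
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i=0
    [[ -n $pid ]] || return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((i++ < max_retries)); do
        kill -0 "$pid" || return 1                 # target died before listening
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                               # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1                                       # retries exhausted
}
```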
00:07:29.197 21:24:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:29.197 21:24:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.197 [2024-06-07 21:24:29.336309] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:29.197 [2024-06-07 21:24:29.336363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254967 ] 00:07:29.197 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.197 [2024-06-07 21:24:29.424802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.454 [2024-06-07 21:24:29.515012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.021 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:30.021 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:07:30.021 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1254967 00:07:30.021 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1254967 00:07:30.021 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:30.280 lslocks: write error 00:07:30.280 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1254967 00:07:30.280 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 1254967 ']' 00:07:30.280 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 1254967 00:07:30.280 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:07:30.280 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:30.280 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1254967 00:07:30.539 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:30.539 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:30.539 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1254967' 00:07:30.539 killing process with pid 1254967 00:07:30.539 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 1254967 00:07:30.539 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 1254967 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1254967 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1254967 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- 
common/autotest_common.sh@652 -- # waitforlisten 1254967 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1254967 ']' 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1254967) - No such process 00:07:30.798 ERROR: process (pid: 1254967) is no longer running 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:30.798 00:07:30.798 real 0m1.618s 00:07:30.798 user 0m1.779s 00:07:30.798 sys 0m0.519s 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.798 21:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.798 ************************************ 00:07:30.798 END TEST default_locks 00:07:30.798 ************************************ 00:07:30.798 21:24:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:30.798 21:24:30 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:30.798 21:24:30 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:30.798 21:24:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.798 ************************************ 00:07:30.798 START TEST default_locks_via_rpc 00:07:30.798 ************************************ 00:07:30.798 21:24:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:07:30.798 21:24:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1255357 00:07:30.798 21:24:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1255357 00:07:30.799 21:24:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1255357 ']' 00:07:30.799 21:24:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.799 21:24:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:30.799 21:24:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.799 21:24:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.799 21:24:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:30.799 21:24:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.799 [2024-06-07 21:24:31.018761] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:30.799 [2024-06-07 21:24:31.018817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255357 ] 00:07:30.799 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.058 [2024-06-07 21:24:31.108320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.058 [2024-06-07 21:24:31.198491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1255357 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1255357 00:07:32.010 21:24:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@73 -- # killprocess 1255357 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 1255357 ']' 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 1255357 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1255357 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1255357' 00:07:32.269 killing process with pid 1255357 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 1255357 00:07:32.269 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 1255357 00:07:32.528 00:07:32.528 real 0m1.810s 00:07:32.528 user 0m1.966s 00:07:32.528 sys 0m0.593s 00:07:32.528 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.528 21:24:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.528 ************************************ 00:07:32.528 END TEST default_locks_via_rpc 00:07:32.528 ************************************ 00:07:32.787 21:24:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:32.787 21:24:32 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:32.787 21:24:32 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:32.787 21:24:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.787 ************************************ 00:07:32.787 START TEST non_locking_app_on_locked_coremask 00:07:32.787 ************************************ 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1255810 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1255810 /var/tmp/spdk.sock 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1255810 ']' 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
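Annotation: both lock tests that just completed reduce to one probe. spdk_tgt takes a flock on one /var/tmp/spdk_cpu_lock_NNN file per claimed core, and locks_exist asks lslocks whether the pid holds any of them — the `lslocks: write error` lines above are harmless, since grep -q exits on the first match and lslocks loses its pipe. default_locks_via_rpc additionally toggles the locks at runtime; rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py. Condensed sketch, paths taken from the trace:

```bash
locks_exist() {
    local pid=$1
    # One lock file per claimed core, e.g. /var/tmp/spdk_cpu_lock_000 for core 0.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# Runtime toggle, as exercised by default_locks_via_rpc:
scripts/rpc.py framework_disable_cpumask_locks     # releases the lock files
locks_exist "$spdk_tgt_pid" && echo "unexpected: locks still held"
scripts/rpc.py framework_enable_cpumask_locks      # re-claims them
locks_exist "$spdk_tgt_pid" || echo "unexpected: locks missing"
```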
00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.787 21:24:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.787 [2024-06-07 21:24:32.893464] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:32.787 [2024-06-07 21:24:32.893518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255810 ] 00:07:32.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.787 [2024-06-07 21:24:32.980887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.045 [2024-06-07 21:24:33.073435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1255899 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1255899 /var/tmp/spdk2.sock 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1255899 ']' 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:33.612 21:24:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.612 [2024-06-07 21:24:33.866406] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:33.612 [2024-06-07 21:24:33.866466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255899 ] 00:07:33.871 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.871 [2024-06-07 21:24:33.984984] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
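Annotation: the launch traced here is the core of non_locking_app_on_locked_coremask — the first target owns core 0's lock, and a second target asking for the same mask can still start because --disable-cpumask-locks skips the claim, which is what the `CPU core locks deactivated` notice confirms. In shorthand (the backgrounding and pid capture are implied by the trace rather than shown in it):

```bash
# First instance claims core 0 via /var/tmp/spdk_cpu_lock_000.
build/bin/spdk_tgt -m 0x1 &
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

# Second instance shares core 0 but opts out of lock enforcement entirely,
# so startup succeeds and logs "CPU core locks deactivated".
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock
```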
00:07:33.871 [2024-06-07 21:24:33.985022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.130 [2024-06-07 21:24:34.172602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.697 21:24:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:34.697 21:24:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:34.697 21:24:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1255810 00:07:34.697 21:24:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1255810 00:07:34.697 21:24:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.265 lslocks: write error 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1255810 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1255810 ']' 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1255810 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1255810 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1255810' 00:07:35.265 killing process with pid 1255810 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1255810 00:07:35.265 21:24:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1255810 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1255899 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1255899 ']' 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1255899 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1255899 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1255899' 00:07:35.834 
killing process with pid 1255899 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1255899 00:07:35.834 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1255899 00:07:36.401 00:07:36.401 real 0m3.585s 00:07:36.401 user 0m4.009s 00:07:36.401 sys 0m1.052s 00:07:36.401 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.401 21:24:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.401 ************************************ 00:07:36.401 END TEST non_locking_app_on_locked_coremask 00:07:36.401 ************************************ 00:07:36.401 21:24:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:36.401 21:24:36 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:36.401 21:24:36 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.401 21:24:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.401 ************************************ 00:07:36.401 START TEST locking_app_on_unlocked_coremask 00:07:36.401 ************************************ 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1256380 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1256380 /var/tmp/spdk.sock 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1256380 ']' 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:36.401 21:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.401 [2024-06-07 21:24:36.544851] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:36.401 [2024-06-07 21:24:36.544906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256380 ] 00:07:36.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.401 [2024-06-07 21:24:36.636122] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
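Annotation: locking_app_on_unlocked_coremask, starting here, flips the roles — the first target launches with --disable-cpumask-locks, so core 0 stays unclaimed, and the plain second instance becomes the lock holder; the trace below accordingly runs locks_exist against the second pid, and kills the instances in first-then-second order. In shorthand (same caveats as the previous sketch):

```bash
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # takes no lock
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # claims core 0
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock
locks_exist "$pid2"                                     # the lock belongs to pid2
```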
00:07:36.401 [2024-06-07 21:24:36.636155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.659 [2024-06-07 21:24:36.727873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1256640 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1256640 /var/tmp/spdk2.sock 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1256640 ']' 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:37.226 21:24:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.485 [2024-06-07 21:24:37.528921] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:07:37.485 [2024-06-07 21:24:37.528981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256640 ] 00:07:37.485 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.485 [2024-06-07 21:24:37.650189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.744 [2024-06-07 21:24:37.821008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.312 21:24:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:38.312 21:24:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:38.312 21:24:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1256640 00:07:38.312 21:24:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1256640 00:07:38.312 21:24:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.881 lslocks: write error 00:07:38.881 21:24:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1256380 00:07:38.881 21:24:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1256380 ']' 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1256380 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1256380 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1256380' 00:07:38.881 killing process with pid 1256380 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1256380 00:07:38.881 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1256380 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1256640 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1256640 ']' 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1256640 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1256640 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1256640' 00:07:39.819 killing process with pid 1256640 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1256640 00:07:39.819 21:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1256640 00:07:40.078 00:07:40.078 real 0m3.617s 00:07:40.078 user 0m4.041s 00:07:40.078 sys 0m1.067s 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.078 ************************************ 00:07:40.078 END TEST locking_app_on_unlocked_coremask 00:07:40.078 ************************************ 00:07:40.078 21:24:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:40.078 21:24:40 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:40.078 21:24:40 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:40.078 21:24:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.078 ************************************ 00:07:40.078 START TEST locking_app_on_locked_coremask 00:07:40.078 ************************************ 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1257198 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1257198 /var/tmp/spdk.sock 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1257198 ']' 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.078 21:24:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.078 [2024-06-07 21:24:40.229898] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:07:40.078 [2024-06-07 21:24:40.229955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257198 ] 00:07:40.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.078 [2024-06-07 21:24:40.316995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.337 [2024-06-07 21:24:40.409074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1257344 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1257344 /var/tmp/spdk2.sock 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1257344 /var/tmp/spdk2.sock 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1257344 /var/tmp/spdk2.sock 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1257344 ']' 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:40.905 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.164 [2024-06-07 21:24:41.211930] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:07:41.164 [2024-06-07 21:24:41.211995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257344 ] 00:07:41.164 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.164 [2024-06-07 21:24:41.334466] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1257198 has claimed it. 00:07:41.164 [2024-06-07 21:24:41.334512] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:41.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1257344) - No such process 00:07:41.732 ERROR: process (pid: 1257344) is no longer running 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1257198 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1257198 00:07:41.732 21:24:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:41.992 lslocks: write error 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1257198 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1257198 ']' 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1257198 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1257198 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1257198' 00:07:41.992 killing process with pid 1257198 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1257198 00:07:41.992 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1257198 00:07:42.251 00:07:42.251 real 0m2.322s 00:07:42.251 user 0m2.682s 00:07:42.251 sys 0m0.644s 00:07:42.251 21:24:42 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.251 21:24:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 ************************************ 00:07:42.251 END TEST locking_app_on_locked_coremask 00:07:42.251 ************************************ 00:07:42.509 21:24:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:42.509 21:24:42 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:42.509 21:24:42 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.509 21:24:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.509 ************************************ 00:07:42.509 START TEST locking_overlapped_coremask 00:07:42.509 ************************************ 00:07:42.509 21:24:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:07:42.509 21:24:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1257605 00:07:42.509 21:24:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1257605 /var/tmp/spdk.sock 00:07:42.509 21:24:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1257605 ']' 00:07:42.509 21:24:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.510 21:24:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:42.510 21:24:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.510 21:24:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:42.510 21:24:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.510 21:24:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:42.510 [2024-06-07 21:24:42.619051] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
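Annotation: locking_app_on_locked_coremask, wrapped up just above, is the negative case — with the first target holding core 0's lock, a second plain instance must die with `Cannot create lock on core 0`, and the test asserts that with the NOT wrapper. A sketch of NOT as it behaves in the trace; the signal (es > 128) and optional EXIT_STATUS branches, both visible in the xtrace, are elided here:

```bash
NOT() {
    local es=0
    "$@" || es=$?           # run the wrapped command, capture its exit status
    # (handling for es > 128 and an expected EXIT_STATUS elided)
    (( !es == 0 ))          # exit 0 only if the wrapped command failed
}

# Usage, mirroring the trace: the second target cannot claim core 0,
# waitforlisten sees it die, and NOT turns that expected failure into a pass.
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock
```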
00:07:42.510 [2024-06-07 21:24:42.619108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257605 ] 00:07:42.510 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.510 [2024-06-07 21:24:42.706518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.769 [2024-06-07 21:24:42.800221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.769 [2024-06-07 21:24:42.800242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.769 [2024-06-07 21:24:42.800246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1257773 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1257773 /var/tmp/spdk2.sock 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1257773 /var/tmp/spdk2.sock 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:43.333 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:43.334 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:43.334 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1257773 /var/tmp/spdk2.sock 00:07:43.334 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1257773 ']' 00:07:43.334 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.334 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:43.334 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.334 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:43.334 21:24:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.592 [2024-06-07 21:24:43.617726] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:07:43.592 [2024-06-07 21:24:43.617789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257773 ] 00:07:43.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.592 [2024-06-07 21:24:43.710194] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1257605 has claimed it. 00:07:43.592 [2024-06-07 21:24:43.710231] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:44.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1257773) - No such process 00:07:44.160 ERROR: process (pid: 1257773) is no longer running 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1257605 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 1257605 ']' 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 1257605 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1257605 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1257605' 00:07:44.160 killing process with pid 1257605 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@968 -- # kill 1257605 00:07:44.160 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 1257605 00:07:44.728 00:07:44.728 real 0m2.139s 00:07:44.728 user 0m6.109s 00:07:44.728 sys 0m0.474s 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.728 ************************************ 00:07:44.728 END TEST locking_overlapped_coremask 00:07:44.728 ************************************ 00:07:44.728 21:24:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:44.728 21:24:44 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:44.728 21:24:44 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:44.728 21:24:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.728 ************************************ 00:07:44.728 START TEST locking_overlapped_coremask_via_rpc 00:07:44.728 ************************************ 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1258067 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1258067 /var/tmp/spdk.sock 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1258067 ']' 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.728 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:44.729 21:24:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.729 [2024-06-07 21:24:44.827172] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:44.729 [2024-06-07 21:24:44.827228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258067 ] 00:07:44.729 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.729 [2024-06-07 21:24:44.914687] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
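Annotation: the overlapped test that finished above generalizes the collision to multi-core masks — -m 0x7 locks cores 0-2, so a second target asking for 0x1c (cores 2-4) trips on the shared core 2, and check_remaining_locks then verifies the survivor still holds exactly its three lock files. The helper as traced:

```bash
check_remaining_locks() {
    # Globbing picks up whatever lock files exist; the brace expansion builds
    # the set a 0x7 mask should own (cores 0, 1, 2 -> suffixes 000-002).
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}
```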
00:07:44.729 [2024-06-07 21:24:44.914717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.987 [2024-06-07 21:24:45.009183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.987 [2024-06-07 21:24:45.009295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.987 [2024-06-07 21:24:45.009296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1258324 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1258324 /var/tmp/spdk2.sock 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1258324 ']' 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:45.555 21:24:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.555 [2024-06-07 21:24:45.824502] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:45.555 [2024-06-07 21:24:45.824564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258324 ] 00:07:45.815 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.815 [2024-06-07 21:24:45.914791] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:45.815 [2024-06-07 21:24:45.914819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.815 [2024-06-07 21:24:46.065206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.815 [2024-06-07 21:24:46.065324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.815 [2024-06-07 21:24:46.065325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.753 [2024-06-07 21:24:46.786104] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1258067 has claimed it. 
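The failure just traced is the point of this test: both targets start with --disable-cpumask-locks, the first (pid 1258067, -m 0x7, cores 0-2) then claims its cores over RPC, and the second (-m 0x1c, cores 2-4) fails on the one core the masks share, since 0x7 & 0x1c = 0x4, i.e. exactly core 2. The /var/tmp/spdk_cpu_lock_NNN paths checked by check_remaining_locks suggest a per-core advisory-lock pattern; below is a minimal sketch of that pattern in Python, assuming flock semantics — it is not SPDK's actual C implementation.

```python
import fcntl
import os

# Reactor masks from the two spdk_tgt command lines traced above.
MASK_A = 0x07   # first target:  cores 0, 1, 2
MASK_B = 0x1c   # second target: cores 2, 3, 4
assert MASK_A & MASK_B == 0x04          # the one contested core is core 2

def claim_core(core: int) -> int:
    """Take an exclusive, non-blocking advisory lock on the per-core
    file named in the trace (/var/tmp/spdk_cpu_lock_NNN). Keeping the
    fd open keeps the claim; it is released when the process exits."""
    path = f"/var/tmp/spdk_cpu_lock_{core:03d}"
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        raise RuntimeError(f"core {core} already claimed by another process")
    return fd

# While the first target holds core 2's lock, a second claim_core(2)
# raises -- the flock analogue of "Cannot create lock on core 2".
```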
00:07:46.753 request: 00:07:46.753 { 00:07:46.753 "method": "framework_enable_cpumask_locks", 00:07:46.753 "req_id": 1 00:07:46.753 } 00:07:46.753 Got JSON-RPC error response 00:07:46.753 response: 00:07:46.753 { 00:07:46.753 "code": -32603, 00:07:46.753 "message": "Failed to claim CPU core: 2" 00:07:46.753 } 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1258067 /var/tmp/spdk.sock 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1258067 ']' 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1258324 /var/tmp/spdk2.sock 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1258324 ']' 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
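The request/response pair above is plain JSON-RPC over a Unix domain socket. A minimal client that would reproduce it against the second target is sketched below; the method name, socket path, and error code come from the trace, while the framing (send one JSON object, read the reply in one shot) is an assumption of the sketch, not a documented contract.

```python
import json
import socket

# Socket path and method are the ones shown in the trace above.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk2.sock")
request = {"jsonrpc": "2.0", "method": "framework_enable_cpumask_locks", "id": 1}
sock.sendall(json.dumps(request).encode())

# A robust client would loop until a complete JSON object has arrived;
# a single recv() is enough for this small reply.
reply = json.loads(sock.recv(65536).decode())
# Against a target whose cores are already claimed elsewhere, expect:
# {"jsonrpc": "2.0", "id": 1,
#  "error": {"code": -32603, "message": "Failed to claim CPU core: 2"}}
print(reply.get("error"))
sock.close()
```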
00:07:46.753 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:46.754 21:24:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.013 21:24:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:47.013 21:24:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:47.013 21:24:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:47.013 21:24:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:47.013 21:24:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:47.013 21:24:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:47.013 00:07:47.013 real 0m2.482s 00:07:47.013 user 0m1.215s 00:07:47.013 sys 0m0.191s 00:07:47.013 21:24:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.013 21:24:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.013 ************************************ 00:07:47.013 END TEST locking_overlapped_coremask_via_rpc 00:07:47.013 ************************************ 00:07:47.272 21:24:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:47.272 21:24:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1258067 ]] 00:07:47.272 21:24:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1258067 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1258067 ']' 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1258067 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1258067 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1258067' 00:07:47.272 killing process with pid 1258067 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1258067 00:07:47.272 21:24:47 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1258067 00:07:47.532 21:24:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1258324 ]] 00:07:47.532 21:24:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1258324 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1258324 ']' 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1258324 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1258324 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1258324' 00:07:47.532 killing process with pid 1258324 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1258324 00:07:47.532 21:24:47 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1258324 00:07:47.791 21:24:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:47.791 21:24:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:47.791 21:24:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1258067 ]] 00:07:47.791 21:24:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1258067 00:07:47.791 21:24:48 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1258067 ']' 00:07:47.791 21:24:48 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1258067 00:07:47.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1258067) - No such process 00:07:47.791 21:24:48 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1258067 is not found' 00:07:47.791 Process with pid 1258067 is not found 00:07:47.791 21:24:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1258324 ]] 00:07:47.791 21:24:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1258324 00:07:47.791 21:24:48 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1258324 ']' 00:07:47.791 21:24:48 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1258324 00:07:47.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1258324) - No such process 00:07:47.791 21:24:48 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1258324 is not found' 00:07:47.791 Process with pid 1258324 is not found 00:07:47.791 21:24:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:47.791 00:07:47.791 real 0m18.885s 00:07:47.791 user 0m33.846s 00:07:47.791 sys 0m5.503s 00:07:47.791 21:24:48 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.791 21:24:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.791 ************************************ 00:07:47.791 END TEST cpu_locks 00:07:47.791 ************************************ 00:07:48.050 00:07:48.050 real 0m44.684s 00:07:48.050 user 1m25.332s 00:07:48.050 sys 0m9.426s 00:07:48.050 21:24:48 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:48.050 21:24:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:48.050 ************************************ 00:07:48.050 END TEST event 00:07:48.050 ************************************ 00:07:48.050 21:24:48 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:48.050 21:24:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:48.050 21:24:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:48.050 21:24:48 -- common/autotest_common.sh@10 -- # set +x 00:07:48.050 ************************************ 00:07:48.050 START TEST thread 00:07:48.050 ************************************ 00:07:48.050 21:24:48 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:48.050 * Looking for test storage... 00:07:48.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:48.050 21:24:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:48.050 21:24:48 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:48.050 21:24:48 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:48.050 21:24:48 thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.050 ************************************ 00:07:48.050 START TEST thread_poller_perf 00:07:48.050 ************************************ 00:07:48.050 21:24:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:48.050 [2024-06-07 21:24:48.277425] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:48.050 [2024-06-07 21:24:48.277494] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258730 ] 00:07:48.050 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.309 [2024-06-07 21:24:48.368717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.309 [2024-06-07 21:24:48.457730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.309 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:49.687 ====================================== 00:07:49.687 busy:2211021904 (cyc) 00:07:49.687 total_run_count: 256000 00:07:49.687 tsc_hz: 2200000000 (cyc) 00:07:49.687 ====================================== 00:07:49.687 poller_cost: 8636 (cyc), 3925 (nsec) 00:07:49.687 00:07:49.687 real 0m1.287s 00:07:49.687 user 0m1.176s 00:07:49.687 sys 0m0.105s 00:07:49.687 21:24:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:49.687 21:24:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:49.687 ************************************ 00:07:49.687 END TEST thread_poller_perf 00:07:49.687 ************************************ 00:07:49.687 21:24:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:49.687 21:24:49 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:49.687 21:24:49 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:49.687 21:24:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.687 ************************************ 00:07:49.687 START TEST thread_poller_perf 00:07:49.687 ************************************ 00:07:49.687 21:24:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:49.687 [2024-06-07 21:24:49.635685] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
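Before the second run's numbers scroll past: the first poller_perf summary above reduces to simple arithmetic. poller_cost is the busy cycle count divided by the run count, converted to nanoseconds via the reported TSC frequency. The sketch below reproduces the printed 8636 cyc / 3925 nsec; integer division matches the output, though the tool's exact rounding mode is an assumption.

```python
# Numbers taken from the "====" summary block above.
busy_cycles = 2_211_021_904      # "busy:" line, in TSC cycles
run_count   = 256_000            # "total_run_count"
tsc_hz      = 2_200_000_000      # "tsc_hz" (2.2 GHz)

cost_cycles = busy_cycles // run_count                 # 8636 (cyc)
cost_nsec   = cost_cycles * 1_000_000_000 // tsc_hz    # 3925 (nsec)
print(cost_cycles, cost_nsec)
```

The same arithmetic applied to the 0-microsecond run that follows (2202639234 cycles over 3336000 iterations) gives 660 cyc and 300 nsec, matching its summary as well.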
00:07:49.687 [2024-06-07 21:24:49.635757] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258981 ] 00:07:49.687 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.687 [2024-06-07 21:24:49.729012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.687 [2024-06-07 21:24:49.814591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.687 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:50.650 ====================================== 00:07:50.650 busy:2202639234 (cyc) 00:07:50.650 total_run_count: 3336000 00:07:50.651 tsc_hz: 2200000000 (cyc) 00:07:50.651 ====================================== 00:07:50.651 poller_cost: 660 (cyc), 300 (nsec) 00:07:50.651 00:07:50.651 real 0m1.282s 00:07:50.651 user 0m1.177s 00:07:50.651 sys 0m0.098s 00:07:50.651 21:24:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:50.651 21:24:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:50.651 ************************************ 00:07:50.651 END TEST thread_poller_perf 00:07:50.651 ************************************ 00:07:50.952 21:24:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:50.952 00:07:50.952 real 0m2.788s 00:07:50.952 user 0m2.428s 00:07:50.952 sys 0m0.366s 00:07:50.952 21:24:50 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:50.952 21:24:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.952 ************************************ 00:07:50.952 END TEST thread 00:07:50.952 ************************************ 00:07:50.952 21:24:50 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:50.952 21:24:50 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:50.952 21:24:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:50.952 21:24:50 -- common/autotest_common.sh@10 -- # set +x 00:07:50.952 ************************************ 00:07:50.952 START TEST accel 00:07:50.952 ************************************ 00:07:50.952 21:24:50 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:50.952 * Looking for test storage... 
00:07:50.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:50.952 21:24:51 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:50.952 21:24:51 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:50.952 21:24:51 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:50.952 21:24:51 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1259311 00:07:50.952 21:24:51 accel -- accel/accel.sh@63 -- # waitforlisten 1259311 00:07:50.952 21:24:51 accel -- common/autotest_common.sh@830 -- # '[' -z 1259311 ']' 00:07:50.952 21:24:51 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.952 21:24:51 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:50.952 21:24:51 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:50.952 21:24:51 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:50.952 21:24:51 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.952 21:24:51 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:50.952 21:24:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.952 21:24:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.952 21:24:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.952 21:24:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.952 21:24:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.952 21:24:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.952 21:24:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:50.952 21:24:51 accel -- accel/accel.sh@41 -- # jq -r . 00:07:50.952 [2024-06-07 21:24:51.146130] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:50.952 [2024-06-07 21:24:51.146190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259311 ] 00:07:50.952 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.212 [2024-06-07 21:24:51.237161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.212 [2024-06-07 21:24:51.329389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@863 -- # return 0 00:07:52.150 21:24:52 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:52.150 21:24:52 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:52.150 21:24:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:52.150 21:24:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:52.150 21:24:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:52.150 21:24:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.150 21:24:52 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 
21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:52.150 21:24:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:52.150 21:24:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:52.150 21:24:52 accel -- accel/accel.sh@75 -- # killprocess 1259311 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@949 -- # '[' -z 1259311 ']' 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@953 -- # kill -0 1259311 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@954 -- # uname 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1259311 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1259311' 00:07:52.150 killing process with pid 1259311 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@968 -- # kill 1259311 00:07:52.150 21:24:52 accel -- common/autotest_common.sh@973 -- # wait 1259311 00:07:52.410 21:24:52 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:52.410 21:24:52 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:52.410 21:24:52 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:52.410 21:24:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:52.410 21:24:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.410 21:24:52 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:52.410 21:24:52 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:52.410 21:24:52 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:52.410 21:24:52 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:52.410 21:24:52 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:52.410 21:24:52 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:52.410 21:24:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:52.410 21:24:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.410 ************************************ 00:07:52.410 START TEST accel_missing_filename 00:07:52.410 ************************************ 00:07:52.410 21:24:52 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:07:52.410 21:24:52 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:07:52.410 21:24:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:52.410 21:24:52 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:52.410 21:24:52 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:52.410 21:24:52 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:52.410 21:24:52 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:52.410 21:24:52 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:52.410 21:24:52 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:52.410 [2024-06-07 21:24:52.672399] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:52.411 [2024-06-07 21:24:52.672467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259663 ] 00:07:52.670 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.670 [2024-06-07 21:24:52.760850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.670 [2024-06-07 21:24:52.848612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.670 [2024-06-07 21:24:52.893720] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.931 [2024-06-07 21:24:52.956981] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:52.931 A filename is required. 
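The NOT wrapper driving this case expects the command to fail: accel_perf -t 1 -w compress is launched without -l, the app comes up but the compress workload has no input file, and the run aborts non-zero with "A filename is required." The same negative check in miniature (a sketch; the binary path and flags are the ones traced above, and checking both output streams for the message is an assumption):

```python
import subprocess

# Run the traced command without "-l <file>" and assert that it fails,
# which is the essence of the test's NOT helper.
proc = subprocess.run(
    ["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf",
     "-t", "1", "-w", "compress"],          # no -l: compress has no input
    capture_output=True, text=True,
)
assert proc.returncode != 0, "compress without an input file should fail"
assert "A filename is required." in proc.stdout + proc.stderr
```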
00:07:52.931 21:24:53 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:07:52.931 21:24:53 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:52.931 21:24:53 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:07:52.931 21:24:53 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:07:52.931 21:24:53 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:07:52.931 21:24:53 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:52.931 00:07:52.931 real 0m0.394s 00:07:52.931 user 0m0.288s 00:07:52.931 sys 0m0.146s 00:07:52.931 21:24:53 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:52.931 21:24:53 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:52.931 ************************************ 00:07:52.931 END TEST accel_missing_filename 00:07:52.931 ************************************ 00:07:52.931 21:24:53 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:52.931 21:24:53 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:52.931 21:24:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:52.931 21:24:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.931 ************************************ 00:07:52.931 START TEST accel_compress_verify 00:07:52.931 ************************************ 00:07:52.931 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:52.931 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:07:52.931 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:52.931 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:52.931 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:52.931 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:52.931 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:52.931 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:52.931 21:24:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:52.931 21:24:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:52.931 21:24:53 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.931 21:24:53 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.931 21:24:53 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.931 21:24:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.931 21:24:53 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.931 
21:24:53 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:52.931 21:24:53 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:52.931 [2024-06-07 21:24:53.130205] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:52.931 [2024-06-07 21:24:53.130254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259884 ] 00:07:52.931 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.190 [2024-06-07 21:24:53.207979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.190 [2024-06-07 21:24:53.297310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.190 [2024-06-07 21:24:53.342369] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.190 [2024-06-07 21:24:53.404883] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:53.450 00:07:53.450 Compression does not support the verify option, aborting. 00:07:53.450 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:07:53.450 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:53.450 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:07:53.450 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:07:53.450 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:07:53.450 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:53.450 00:07:53.450 real 0m0.383s 00:07:53.450 user 0m0.277s 00:07:53.450 sys 0m0.142s 00:07:53.450 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.450 21:24:53 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:53.450 ************************************ 00:07:53.450 END TEST accel_compress_verify 00:07:53.450 ************************************ 00:07:53.450 21:24:53 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:53.450 21:24:53 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:53.450 21:24:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.450 21:24:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.450 ************************************ 00:07:53.450 START TEST accel_wrong_workload 00:07:53.450 ************************************ 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 
1 -w foobar 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:53.450 21:24:53 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:53.450 Unsupported workload type: foobar 00:07:53.450 [2024-06-07 21:24:53.579411] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:53.450 accel_perf options: 00:07:53.450 [-h help message] 00:07:53.450 [-q queue depth per core] 00:07:53.450 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:53.450 [-T number of threads per core 00:07:53.450 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:53.450 [-t time in seconds] 00:07:53.450 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:53.450 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:53.450 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:53.450 [-l for compress/decompress workloads, name of uncompressed input file 00:07:53.450 [-S for crc32c workload, use this seed value (default 0) 00:07:53.450 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:53.450 [-f for fill workload, use this BYTE value (default 255) 00:07:53.450 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:53.450 [-y verify result if this switch is on] 00:07:53.450 [-a tasks to allocate per core (default: same value as -q)] 00:07:53.450 Can be used to spread operations across a wider range of memory. 
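The usage text is printed because foobar is not in the accepted workload list, so spdk_app_parse_args rejects -w before the app ever starts. For contrast, a well-formed invocation built only from options that appear in that help text (a sketch; the queue depth, transfer size, and duration are arbitrary, and it assumes the built binary exists at the traced path):

```python
import subprocess

# -q queue depth, -o transfer size, -t seconds, -w workload, -y verify:
# every flag below is listed in the help dump above, and crc32c is in
# its workload list.
subprocess.run(
    ["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf",
     "-q", "64", "-o", "4096", "-t", "1", "-w", "crc32c", "-y"],
    check=True,   # raise if the run exits non-zero
)
```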
00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:53.450 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:53.451 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:53.451 00:07:53.451 real 0m0.035s 00:07:53.451 user 0m0.021s 00:07:53.451 sys 0m0.014s 00:07:53.451 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.451 21:24:53 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:53.451 ************************************ 00:07:53.451 END TEST accel_wrong_workload 00:07:53.451 ************************************ 00:07:53.451 Error: writing output failed: Broken pipe 00:07:53.451 21:24:53 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:53.451 21:24:53 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:53.451 21:24:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.451 21:24:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.451 ************************************ 00:07:53.451 START TEST accel_negative_buffers 00:07:53.451 ************************************ 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:53.451 21:24:53 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:53.451 -x option must be non-negative. 
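The es bookkeeping visible in these traces normalizes exit statuses before the final (( !es == 0 )) assertion: a status above 128 has 128 subtracted first (234 -> 106 in the compress case, 161 -> 33 in the verify case), and the remainder is then collapsed to a plain failure of 1. The folding in miniature; the exact case table in autotest_common.sh is an assumption, but every value traced here (234, 161, 1) follows this path:

```python
def fold_exit_status(es: int) -> int:
    """Mirror the NOT helper's normalization seen in the traces:
    signal-style statuses (> 128) have 128 subtracted, and any
    remaining nonzero status is collapsed to 1."""
    if es > 128:
        es -= 128           # e.g. 234 -> 106, 161 -> 33
    return 1 if es != 0 else 0

assert fold_exit_status(234) == 1   # missing-filename case
assert fold_exit_status(161) == 1   # compress-verify case
assert fold_exit_status(1) == 1     # wrong-workload case
assert fold_exit_status(0) == 0     # a passing command stays 0
```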
00:07:53.451 [2024-06-07 21:24:53.676158] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:53.451 accel_perf options: 00:07:53.451 [-h help message] 00:07:53.451 [-q queue depth per core] 00:07:53.451 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:53.451 [-T number of threads per core 00:07:53.451 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:53.451 [-t time in seconds] 00:07:53.451 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:53.451 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:53.451 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:53.451 [-l for compress/decompress workloads, name of uncompressed input file 00:07:53.451 [-S for crc32c workload, use this seed value (default 0) 00:07:53.451 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:53.451 [-f for fill workload, use this BYTE value (default 255) 00:07:53.451 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:53.451 [-y verify result if this switch is on] 00:07:53.451 [-a tasks to allocate per core (default: same value as -q)] 00:07:53.451 Can be used to spread operations across a wider range of memory. 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:53.451 00:07:53.451 real 0m0.033s 00:07:53.451 user 0m0.017s 00:07:53.451 sys 0m0.016s 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.451 21:24:53 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:53.451 ************************************ 00:07:53.451 END TEST accel_negative_buffers 00:07:53.451 ************************************ 00:07:53.451 Error: writing output failed: Broken pipe 00:07:53.451 21:24:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:53.451 21:24:53 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:53.451 21:24:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.451 21:24:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.711 ************************************ 00:07:53.711 START TEST accel_crc32c 00:07:53.711 ************************************ 00:07:53.711 21:24:53 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:53.711 21:24:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:53.711 [2024-06-07 21:24:53.777821] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:53.711 [2024-06-07 21:24:53.777875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259947 ] 00:07:53.711 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.711 [2024-06-07 21:24:53.867594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.711 [2024-06-07 21:24:53.957022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 
21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.971 21:24:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.911 21:24:55 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:54.911 21:24:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.911 00:07:54.911 real 0m1.401s 00:07:54.911 user 0m1.256s 00:07:54.911 sys 0m0.152s 00:07:54.911 21:24:55 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:54.911 21:24:55 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:54.911 ************************************ 00:07:54.911 END TEST accel_crc32c 00:07:54.911 ************************************ 00:07:55.171 21:24:55 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:55.171 21:24:55 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:55.171 21:24:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:55.171 21:24:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.171 ************************************ 00:07:55.171 START TEST accel_crc32c_C2 00:07:55.171 ************************************ 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:55.171 21:24:55 
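
A note on the accel_crc32c run that just completed above: it drives SPDK's accel_perf example for one second (-t 1) with the crc32c workload over 4096-byte buffers on the software module, verifying each result (-y). The val=32 echoed right after val=crc32c is consistent with -S 32 supplying the CRC seed (the -C 2 run below echoes val=0 in the same slot), though that mapping is inferred from the log, not stated in it. The recurring "EAL: No free 2048 kB hugepages reported on node 1" line is informational; it only means NUMA node 1 has no hugepages reserved. For reference, here is a minimal, table-free sketch of the CRC-32C (Castagnoli) arithmetic the software path computes; SPDK's real implementation is table-driven or ISA-L accelerated, and the ~0 seed with final inversion used here is the conventional choice, an assumption rather than what -S 32 necessarily does:

    /* crc32c_sketch.c: bit-by-bit CRC-32C over the reflected
     * polynomial 0x82F63B78, the same math -w crc32c exercises. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return crc;
    }

    int main(void)
    {
        uint8_t buf[4096];                 /* same buffer size as the test */

        memset(buf, 0xA5, sizeof(buf));    /* arbitrary test pattern */
        /* conventional ~0 seed and final inversion */
        printf("crc32c = 0x%08x\n",
               (unsigned)~crc32c_update(~0u, buf, sizeof(buf)));
        return 0;
    }
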
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:55.171 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:55.171 [2024-06-07 21:24:55.246292] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:55.171 [2024-06-07 21:24:55.246344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260232 ] 00:07:55.171 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.171 [2024-06-07 21:24:55.333665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.171 [2024-06-07 21:24:55.420239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.430 21:24:55 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.430 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.431 21:24:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.369 00:07:56.369 real 0m1.395s 00:07:56.369 user 0m1.269s 00:07:56.369 sys 0m0.136s 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.369 21:24:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:56.369 ************************************ 00:07:56.369 END TEST accel_crc32c_C2 00:07:56.369 ************************************ 00:07:56.628 21:24:56 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:56.628 21:24:56 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:56.628 21:24:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.628 21:24:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.628 ************************************ 00:07:56.628 START TEST accel_copy 00:07:56.628 ************************************ 00:07:56.628 21:24:56 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf 
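
The accel_crc32c_C2 run above repeats the crc32c workload with -C 2, which accel.sh passes through to accel_perf as a chained-buffer count (that reading is an inference from the echoed values). Carrying a CRC-32C across several segments only requires feeding the running value forward as the seed for the next segment, as this sketch shows; crc32c_update() is the bitwise helper from the previous sketch, and the iovec framing is illustrative:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/uio.h>

    extern uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len);

    /* CRC-32C over a scatter-gather list: the CRC of the concatenated
     * data equals the per-segment CRCs chained through the seed. */
    uint32_t crc32c_iov(uint32_t seed, const struct iovec *iov, int iovcnt)
    {
        uint32_t crc = seed;

        for (int i = 0; i < iovcnt; i++)
            crc = crc32c_update(crc, iov[i].iov_base, iov[i].iov_len);
        return crc;
    }
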
-t 1 -w copy -y 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:56.628 21:24:56 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:56.628 [2024-06-07 21:24:56.706782] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:56.628 [2024-06-07 21:24:56.706835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260511 ] 00:07:56.628 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.628 [2024-06-07 21:24:56.792588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.628 [2024-06-07 21:24:56.879261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- 
# case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.887 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.888 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:56.888 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.888 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.888 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.888 21:24:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:56.888 21:24:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.888 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.888 21:24:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.824 21:24:58 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:57.824 21:24:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.824 00:07:57.824 real 0m1.393s 00:07:57.824 user 0m1.262s 00:07:57.824 sys 0m0.139s 00:07:57.824 21:24:58 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:57.824 21:24:58 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:57.824 ************************************ 00:07:57.824 END TEST accel_copy 00:07:57.824 ************************************ 00:07:58.097 21:24:58 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:58.097 21:24:58 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:58.097 21:24:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:58.097 21:24:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.097 ************************************ 00:07:58.097 START TEST accel_fill 00:07:58.097 ************************************ 00:07:58.097 21:24:58 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@34 -- 
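
The accel_copy test that just passed (real 0m1.393s) exercises the simplest opcode: move 4096-byte buffers through the software module and, because of -y, verify every result. On the software path this reduces to a memcpy plus a compare, roughly as follows (the buffer size mirrors the test; the data pattern is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 4096;
        unsigned char *src = malloc(len), *dst = malloc(len);

        if (!src || !dst)
            return 1;
        for (size_t i = 0; i < len; i++)
            src[i] = (unsigned char)i;

        memcpy(dst, src, len);                  /* the copy opcode */
        int ok = memcmp(src, dst, len) == 0;    /* the -y verification */

        printf("copy %s\n", ok ? "verified" : "MISCOMPARE");
        free(src);
        free(dst);
        return !ok;
    }
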
# [[ 0 -gt 0 ]] 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:58.097 21:24:58 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:58.097 [2024-06-07 21:24:58.166946] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:58.097 [2024-06-07 21:24:58.166999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260797 ] 00:07:58.097 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.097 [2024-06-07 21:24:58.254086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.097 [2024-06-07 21:24:58.338183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.356 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.357 21:24:58 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:58.357 21:24:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:59.295 21:24:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.295 00:07:59.295 real 0m1.393s 00:07:59.295 user 0m1.260s 00:07:59.295 sys 0m0.143s 00:07:59.295 21:24:59 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:59.295 21:24:59 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:59.295 ************************************ 00:07:59.295 END TEST accel_fill 00:07:59.295 ************************************ 00:07:59.555 21:24:59 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:59.555 21:24:59 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:59.555 21:24:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:59.555 21:24:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.555 ************************************ 00:07:59.555 START TEST accel_copy_crc32c 00:07:59.555 ************************************ 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.555 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:59.555 21:24:59 
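
accel_fill ran with -f 128 -q 64 -a 64; the val=0x80 echoed above is 128 in hex, i.e. the byte the fill workload replicates across each 4096-byte buffer. Reading -q as accel_perf's queue depth and -a as an alignment or batch knob is an inference from the flag letters, not something this log states. Functionally, the software fill is a memset, which -y then re-checks byte by byte:

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint8_t buf[4096];

        memset(buf, 0x80, sizeof(buf));   /* fill byte from -f 128 */
        for (size_t i = 0; i < sizeof(buf); i++)
            assert(buf[i] == 0x80);       /* what -y verifies */
        return 0;
    }
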
accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:59.555 [2024-06-07 21:24:59.625567] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:07:59.555 [2024-06-07 21:24:59.625637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261074 ] 00:07:59.555 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.555 [2024-06-07 21:24:59.713549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.555 [2024-06-07 21:24:59.797765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:59.816 21:24:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@21 
-- # case "$var" in 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.754 00:08:00.754 real 0m1.395s 00:08:00.754 user 0m1.262s 00:08:00.754 sys 0m0.143s 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:00.754 21:25:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:00.754 ************************************ 00:08:00.754 END TEST accel_copy_crc32c 00:08:00.754 ************************************ 00:08:01.013 21:25:01 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:01.013 21:25:01 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:01.013 21:25:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:01.013 21:25:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.013 ************************************ 00:08:01.013 START TEST accel_copy_crc32c_C2 00:08:01.013 ************************************ 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- 
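
copy_crc32c, which just finished, fuses the two earlier opcodes: the data is copied to the destination while a CRC-32C is produced over the same bytes (the two '4096 bytes' values echoed above are consistent with separate source and destination buffers, though that layout is an accel_perf internal). A one-call sketch, reusing the bitwise helper from the first example:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    extern uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len);

    /* Fused copy + CRC-32C: offload engines do both in one pass; the
     * software module's behavior is equivalent to this pair of steps. */
    uint32_t copy_crc32c(void *dst, const void *src, size_t len, uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c_update(seed, src, len);
    }
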
# IFS=: 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:01.013 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:01.013 [2024-06-07 21:25:01.071672] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:01.014 [2024-06-07 21:25:01.071708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261360 ] 00:08:01.014 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.014 [2024-06-07 21:25:01.148156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.014 [2024-06-07 21:25:01.234779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.014 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.273 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.274 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.274 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.274 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:01.274 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:01.274 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.274 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:01.274 21:25:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.211 00:08:02.211 real 0m1.370s 00:08:02.211 user 0m1.253s 00:08:02.211 sys 0m0.127s 00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:08:02.211 21:25:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:02.211 ************************************ 00:08:02.211 END TEST accel_copy_crc32c_C2 00:08:02.211 ************************************ 00:08:02.211 21:25:02 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:02.211 21:25:02 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:02.211 21:25:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:02.211 21:25:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.470 ************************************ 00:08:02.470 START TEST accel_dualcast 00:08:02.470 ************************************ 00:08:02.470 21:25:02 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:02.470 21:25:02 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:02.470 [2024-06-07 21:25:02.519042] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
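
The chained copy_crc32c_C2 variant above echoed '4096 bytes' and '8192 bytes', consistent with two 4096-byte source segments backing one 8192-byte region; the exact buffer layout is an accel_perf internal, so treat that reading as an assumption. The accel_dualcast test now starting writes a single source to two destinations at once, the pattern hardware offload engines handle in one descriptor; in software it amounts to two copies:

    #include <stddef.h>
    #include <string.h>

    /* dualcast: one source, two destinations. The software fallback
     * is simply two memcpy calls, as modeled here. */
    void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }
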
00:08:02.471 [2024-06-07 21:25:02.519108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261639 ]
00:08:02.471 EAL: No free 2048 kB hugepages reported on node 1
00:08:02.471 [2024-06-07 21:25:02.607500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:02.471 [2024-06-07 21:25:02.694170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.729 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:08:02.730 21:25:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
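The option list just parsed pins down the dualcast run: core mask 0x1, the dualcast opcode, 4096-byte buffers, the software module, and a 1-second verified run (the paired 32s are further harness defaults, left uninterpreted here). A minimal standalone reproduction, assuming the SPDK tree at the workspace path has already been built (the build step is an assumption; this log only shows the test phase):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1: run for one second; -w dualcast: workload; -y: verify the output buffers
  ./build/examples/accel_perf -t 1 -w dualcast -y

The -c /dev/fd/62 seen in the logged command only feeds build_accel_config's generated JSON and is omitted here, on the assumption that the defaults suffice when no accel modules are overridden.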
00:08:03.721 21:25:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:03.721 21:25:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:08:03.721 21:25:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:03.721
00:08:03.721 real 0m1.397s
00:08:03.721 user 0m1.263s
00:08:03.721 sys 0m0.143s
00:08:03.721 21:25:03 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:03.721 21:25:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:08:03.721 ************************************
00:08:03.721 END TEST accel_dualcast
00:08:03.721 ************************************
00:08:03.721 21:25:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:08:03.980 ************************************
00:08:03.980 START TEST accel_compare
00:08:03.980 ************************************
00:08:03.980 21:25:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:08:03.980 21:25:03 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:08:03.980 21:25:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:08:03.980 21:25:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:08:03.980 [2024-06-07 21:25:03.978526] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:08:03.980 [2024-06-07 21:25:03.978594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261916 ]
00:08:03.980 EAL: No free 2048 kB hugepages reported on node 1
00:08:03.980 [2024-06-07 21:25:04.066061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:03.980 [2024-06-07 21:25:04.153018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.980 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:08:03.981 21:25:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
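compare is configured identically to dualcast apart from the opcode, so the same standalone invocation applies with -w swapped. When hunting a regression across opcodes it can be convenient to sweep the verified workloads from this log in one loop (a sketch, not something the harness itself does):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for w in dualcast compare xor; do
      ./build/examples/accel_perf -t 1 -w "$w" -y || echo "workload $w failed"
  done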
00:08:05.358 21:25:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:05.358 21:25:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:08:05.358 21:25:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:05.358
00:08:05.358 real 0m1.397s
00:08:05.358 user 0m1.269s
00:08:05.358 sys 0m0.137s
00:08:05.358 21:25:05 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:05.358 21:25:05 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:08:05.358 ************************************
00:08:05.358 END TEST accel_compare
00:08:05.358 ************************************
00:08:05.358 21:25:05 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:08:05.358 ************************************
00:08:05.358 START TEST accel_xor
00:08:05.358 ************************************
00:08:05.359 21:25:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:08:05.359 21:25:05 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:08:05.359 21:25:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:08:05.359 21:25:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:08:05.359 [2024-06-07 21:25:05.438071] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:08:05.359 [2024-06-07 21:25:05.438124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262204 ]
00:08:05.359 EAL: No free 2048 kB hugepages reported on node 1
00:08:05.359 [2024-06-07 21:25:05.525690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:05.359 [2024-06-07 21:25:05.611933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:05.618 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:08:05.619 21:25:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
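Note the val=2 above: with no -x flag, accel_perf's xor workload uses two source buffers, which this run XORs into the destination and verifies for one second. Standalone, under the same build-tree assumption as the earlier snippets:

  ./build/examples/accel_perf -t 1 -w xor -y    # two XOR source buffers by default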
00:08:06.556 21:25:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:06.556 21:25:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:06.556 21:25:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:06.556
00:08:06.556 real 0m1.395s
00:08:06.556 user 0m1.262s
00:08:06.556 sys 0m0.143s
00:08:06.556 21:25:06 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:06.556 21:25:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:08:06.556 ************************************
00:08:06.556 END TEST accel_xor
00:08:06.556 ************************************
00:08:06.816 21:25:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:08:06.816 ************************************
00:08:06.816 START TEST accel_xor
00:08:06.816 ************************************
00:08:06.816 21:25:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:08:06.816 21:25:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:08:06.816 21:25:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:08:06.816 21:25:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:08:06.816 [2024-06-07 21:25:06.887990] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:08:06.816 [2024-06-07 21:25:06.888125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262481 ]
00:08:06.816 EAL: No free 2048 kB hugepages reported on node 1
00:08:06.816 [2024-06-07 21:25:06.973348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:06.816 [2024-06-07 21:25:07.060810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.075 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:08:07.075 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:08:07.075 21:25:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:08:07.075 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:08:07.075 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:07.075 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:08:07.075 21:25:07 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:08:07.075 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:08:07.076 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:08:07.076 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:08:07.076 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:08:07.076 21:25:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
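This second xor pass differs from the first only in val=3: the harness's -x 3 raises the source-buffer count, so three 4096-byte sources are folded into the destination before the -y verification:

  ./build/examples/accel_perf -t 1 -w xor -y -x 3    # -x sets the XOR source count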
00:08:08.013 21:25:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:08.013 21:25:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:08.013 21:25:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:08.013
00:08:08.013 real 0m1.393s
00:08:08.013 user 0m1.261s
00:08:08.013 sys 0m0.142s
00:08:08.013 21:25:08 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:08.013 21:25:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:08:08.013 ************************************
00:08:08.013 END TEST accel_xor
00:08:08.013 ************************************
00:08:08.273 21:25:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:08.273 ************************************
00:08:08.273 START TEST accel_dif_verify
00:08:08.273 ************************************
00:08:08.273 21:25:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:08:08.273 21:25:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:08.273 21:25:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:08:08.273 21:25:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:08:08.273 [2024-06-07 21:25:08.345402] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:08:08.273 [2024-06-07 21:25:08.345471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262766 ]
00:08:08.273 EAL: No free 2048 kB hugepages reported on node 1
00:08:08.273 [2024-06-07 21:25:08.432536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:08.273 [2024-06-07 21:25:08.520279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:08:08.532 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:08:08.533 21:25:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
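dif_verify parses two sizes the copy-style workloads do not: '512 bytes' and '8 bytes', which line up with a T10-DIF layout of eight 512-byte blocks inside the 4096-byte transfer, each carrying 8 bytes of protection information (an interpretation; the log itself only shows the raw values). Note also val=No: the harness invokes this workload without -y, verification being the operation itself:

  ./build/examples/accel_perf -t 1 -w dif_verify    # no -y, matching the logged invocation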
00:08:09.470 21:25:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:09.470 21:25:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:08:09.470 21:25:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:09.470
00:08:09.470 real 0m1.397s
00:08:09.470 user 0m1.263s
00:08:09.470 sys 0m0.141s
00:08:09.470 21:25:09 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:09.470 21:25:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:08:09.470 ************************************
00:08:09.470 END TEST accel_dif_verify
00:08:09.470 ************************************
00:08:09.729 21:25:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:09.729 ************************************
00:08:09.729 START TEST accel_dif_generate
00:08:09.729 ************************************
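dif_generate, starting here, is the producing half of the DIF pair: it emits the 8-byte protection fields that dif_verify checks, over the same layout of 512-byte blocks within a 4096-byte transfer. Run back to back, the two give a quick self-check of the software DIF path (a sketch, same build-tree assumption as the earlier snippets):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate && \
  ./build/examples/accel_perf -t 1 -w dif_verify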
00:08:09.729 21:25:09 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:09.729 21:25:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:09.729 21:25:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:08:09.729 21:25:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:08:09.729 [2024-06-07 21:25:09.786703] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:08:09.729 [2024-06-07 21:25:09.786738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263046 ]
00:08:09.729 EAL: No free 2048 kB hugepages reported on node 1
00:08:09.729 [2024-06-07 21:25:09.860742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.729 [2024-06-07 21:25:09.948766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:09.729 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:08:09.730 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:08:09.730 21:25:09 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:08:09.730 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:09.730 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:09.730 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:08:09.989 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:08:09.989 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:08:09.989 21:25:09 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:08:09.989 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:08:09.989 21:25:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:08:09.989 21:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:08:09.989 21:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:08:09.989 21:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:08:10.926 21:25:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:10.926 21:25:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:08:10.926 21:25:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:10.926
00:08:10.926 real 0m1.367s
00:08:10.926 user 0m1.251s 00:08:10.926 sys 0m0.124s 00:08:10.926 21:25:11 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:10.926 21:25:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:10.926 ************************************ 00:08:10.926 END TEST accel_dif_generate 00:08:10.926 ************************************ 00:08:10.926 21:25:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:10.926 21:25:11 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:10.926 21:25:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.926 21:25:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.186 ************************************ 00:08:11.186 START TEST accel_dif_generate_copy 00:08:11.186 ************************************ 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:11.186 [2024-06-07 21:25:11.232498] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
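For reference, the dif_generate pass that just completed (real 0m1.367s on the software module) reduces to a single accel_perf invocation. A minimal manual re-run — assuming a local build at the workspace path shown in the log, and dropping the -c /dev/fd/62 descriptor, which carried an empty JSON accel config in this run — would be:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate

Here -t 1 bounds the run to one second and -w selects the DIF-generate workload; the 0x1 core mask and 4096-byte buffers traced above are the values accel_perf settled on without further flags.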
00:08:11.186 [2024-06-07 21:25:11.232552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263331 ] 00:08:11.186 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.186 [2024-06-07 21:25:11.318833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.186 [2024-06-07 21:25:11.405755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.186 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:11.446 21:25:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.383 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.384 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.384 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.384 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.384 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.384 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:12.384 21:25:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.384 00:08:12.384 real 0m1.393s 00:08:12.384 user 0m1.261s 00:08:12.384 sys 0m0.137s 00:08:12.384 21:25:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:12.384 21:25:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.384 ************************************ 00:08:12.384 END TEST accel_dif_generate_copy 00:08:12.384 ************************************ 00:08:12.384 21:25:12 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:12.384 21:25:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.384 21:25:12 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:08:12.384 21:25:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:12.384 21:25:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.643 ************************************ 00:08:12.643 START TEST accel_comp 00:08:12.643 ************************************ 00:08:12.643 21:25:12 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:12.643 21:25:12 accel.accel_comp 
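# The dif_generate_copy case just concluded is the same harness invocation with only the
# workload switched; a comparable manual run, under the same assumptions as the
# dif_generate sketch earlier, would be:
#   /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy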
-- accel/accel.sh@17 -- # local accel_module 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:12.643 [2024-06-07 21:25:12.686139] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:12.643 [2024-06-07 21:25:12.686200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263608 ] 00:08:12.643 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.643 [2024-06-07 21:25:12.775445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.643 [2024-06-07 21:25:12.862435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.643 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:12.902 21:25:12 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- 
# val= 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.903 21:25:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:13.840 21:25:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.840 00:08:13.840 real 0m1.399s 00:08:13.840 user 0m1.267s 00:08:13.840 sys 0m0.139s 00:08:13.840 21:25:14 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:13.840 21:25:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:13.840 ************************************ 00:08:13.840 END TEST accel_comp 00:08:13.840 ************************************ 00:08:13.840 21:25:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:13.840 21:25:14 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:13.840 21:25:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:13.840 21:25:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.100 ************************************ 00:08:14.100 START TEST accel_decomp 00:08:14.100 
************************************ 00:08:14.100 21:25:14 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:14.100 21:25:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:14.100 [2024-06-07 21:25:14.148962] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:14.100 [2024-06-07 21:25:14.149038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263890 ] 00:08:14.100 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.100 [2024-06-07 21:25:14.237270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.100 [2024-06-07 21:25:14.324378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 
21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:14.360 21:25:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:15.299 21:25:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.299 00:08:15.299 real 0m1.399s 00:08:15.299 user 0m1.266s 00:08:15.299 sys 0m0.140s 00:08:15.299 21:25:15 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:15.299 21:25:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:15.299 ************************************ 00:08:15.299 END TEST 
accel_decomp 00:08:15.299 ************************************ 00:08:15.299 21:25:15 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:15.299 21:25:15 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:15.299 21:25:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:15.299 21:25:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.559 ************************************ 00:08:15.559 START TEST accel_decomp_full 00:08:15.559 ************************************ 00:08:15.559 21:25:15 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:15.559 21:25:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:15.559 [2024-06-07 21:25:15.608373] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:08:15.559 [2024-06-07 21:25:15.608425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264173 ] 00:08:15.559 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.559 [2024-06-07 21:25:15.695280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.559 [2024-06-07 21:25:15.782349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.818 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:15.819 21:25:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.756 21:25:16 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:16.756 21:25:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.756 00:08:16.756 real 0m1.407s 00:08:16.756 user 0m1.276s 00:08:16.756 sys 0m0.137s 00:08:16.756 21:25:16 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:16.756 21:25:16 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:16.756 ************************************ 00:08:16.756 END TEST accel_decomp_full 00:08:16.756 ************************************ 00:08:16.756 21:25:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:16.756 21:25:17 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:16.756 21:25:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:16.756 21:25:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 ************************************ 00:08:17.015 START TEST accel_decomp_mcore 00:08:17.015 ************************************ 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
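The _full variant that just finished differs only by -o 0: with a zero transfer size, the traced buffer switches from the 4096-byte default to the full '111250 bytes' bib payload, so each operation decompresses the whole file in one shot. The equivalent manual run, under the same assumptions as the sketches above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0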
00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:17.015 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:17.015 [2024-06-07 21:25:17.075993] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:17.015 [2024-06-07 21:25:17.076065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264456 ] 00:08:17.015 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.015 [2024-06-07 21:25:17.164180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.015 [2024-06-07 21:25:17.256136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.015 [2024-06-07 21:25:17.256256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.015 [2024-06-07 21:25:17.256606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.015 [2024-06-07 21:25:17.256608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 
-- # val= 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:17.275 21:25:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.212 00:08:18.212 real 0m1.413s 00:08:18.212 user 0m4.630s 00:08:18.212 sys 0m0.147s 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:18.212 21:25:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:18.212 ************************************ 00:08:18.212 END TEST accel_decomp_mcore 00:08:18.212 ************************************ 00:08:18.472 21:25:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:18.472 21:25:18 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:18.472 21:25:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.472 21:25:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.472 ************************************ 00:08:18.472 START TEST accel_decomp_full_mcore 00:08:18.472 ************************************ 00:08:18.472 21:25:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:18.472 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:18.472 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:18.472 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.472 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.472 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:18.473 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:18.473 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:18.473 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.473 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.473 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.473 21:25:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.473 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.473 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:18.473 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:18.473 [2024-06-07 21:25:18.559060] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:18.473 [2024-06-07 21:25:18.559112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264745 ] 00:08:18.473 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.473 [2024-06-07 21:25:18.645511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.473 [2024-06-07 21:25:18.736150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.473 [2024-06-07 21:25:18.736250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.473 [2024-06-07 21:25:18.736367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.473 [2024-06-07 21:25:18.736368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:18.735 21:25:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.735 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.736 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.736 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:18.736 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:18.736 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:18.736 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:18.736 21:25:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.735 00:08:19.735 real 0m1.424s 00:08:19.735 user 0m4.689s 00:08:19.735 sys 0m0.144s 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:19.735 21:25:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:19.735 ************************************ 00:08:19.735 END TEST accel_decomp_full_mcore 00:08:19.735 ************************************ 00:08:19.735 21:25:19 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:19.735 21:25:19 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:19.735 21:25:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:19.735 21:25:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.995 ************************************ 00:08:19.995 START TEST accel_decomp_mthread 00:08:19.995 ************************************ 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
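The two multicore runs above drive the same accel_perf binary under core mask 0xf; the only real difference is the transfer size pushed through the software decompress module: 4096-byte operations for accel_decomp_mcore versus the full 111250-byte bib buffer for accel_decomp_full_mcore (selected with -o 0). A minimal by-hand sketch, assuming an SPDK build under the workspace path traced above and hugepages already set up; the harness additionally pipes a JSON accel config in via -c /dev/fd/62, but since build_accel_config produced an empty config here ([[ -n '' ]] in the trace), it can presumably be dropped:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # workspace path taken from the trace
BIB=$SPDK_DIR/test/accel/bib                                # compressed input file passed via -l
# chunked run: 1 second (-t 1) of 4 KiB decompress ops on cores 0-3 (-m 0xf), verifying output (-y)
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -m 0xf
# full-buffer run: -o 0 appears to select the whole input size instead of the 4 KiB default
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf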
00:08:19.995 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:19.995 [2024-06-07 21:25:20.054363] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:19.995 [2024-06-07 21:25:20.054426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265040 ] 00:08:19.995 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.995 [2024-06-07 21:25:20.144905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.995 [2024-06-07 21:25:20.232084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 
21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:20.254 21:25:20 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:20.254 21:25:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:21.191 21:25:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.191 00:08:21.191 real 0m1.410s 00:08:21.191 user 0m1.273s 00:08:21.192 sys 0m0.150s 00:08:21.192 21:25:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:21.192 21:25:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:21.192 ************************************ 00:08:21.192 END TEST accel_decomp_mthread 00:08:21.192 ************************************ 00:08:21.451 21:25:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:21.451 21:25:21 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 
']' 00:08:21.451 21:25:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:21.451 21:25:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.451 ************************************ 00:08:21.451 START TEST accel_decomp_full_mthread 00:08:21.451 ************************************ 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:21.451 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:21.451 [2024-06-07 21:25:21.532679] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
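In contrast to the mcore pair, the mthread variants (the run above and the full-buffer run now starting) pin everything to a single reactor (-c 0x1 in the EAL parameters) and scale with -T 2 instead; going by the test names and the val=2 assignment in the trace, that asks accel_perf for two worker threads on the one core. A sketch under the same path assumptions as the earlier snippet:

# one core, two threads per core: chunked (4 KiB) and full-buffer (-o 0) flavors
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -T 2
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2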
00:08:21.451 [2024-06-07 21:25:21.532733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265331 ] 00:08:21.451 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.451 [2024-06-07 21:25:21.622603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.451 [2024-06-07 21:25:21.711258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.710 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.711 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:21.711 21:25:21 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:21.711 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:21.711 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:21.711 21:25:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.088 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:23.088 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.088 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.088 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.088 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:23.088 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.089 00:08:23.089 real 0m1.439s 00:08:23.089 user 0m1.311s 00:08:23.089 sys 0m0.142s 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:23.089 21:25:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:23.089 ************************************ 00:08:23.089 END 
TEST accel_decomp_full_mthread 00:08:23.089 ************************************ 00:08:23.089 21:25:22 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:23.089 21:25:22 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:23.089 21:25:22 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:23.089 21:25:22 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:23.089 21:25:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:23.089 21:25:22 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.089 21:25:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.089 21:25:22 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.089 21:25:22 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.089 21:25:22 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.089 21:25:22 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.089 21:25:22 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:23.089 21:25:22 accel -- accel/accel.sh@41 -- # jq -r . 00:08:23.089 ************************************ 00:08:23.089 START TEST accel_dif_functional_tests 00:08:23.089 ************************************ 00:08:23.089 21:25:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:23.089 [2024-06-07 21:25:23.064315] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:23.089 [2024-06-07 21:25:23.064365] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265667 ] 00:08:23.089 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.089 [2024-06-07 21:25:23.153352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.089 [2024-06-07 21:25:23.242314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.089 [2024-06-07 21:25:23.242413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.089 [2024-06-07 21:25:23.242414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.089 00:08:23.089 00:08:23.089 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.089 http://cunit.sourceforge.net/ 00:08:23.089 00:08:23.089 00:08:23.089 Suite: accel_dif 00:08:23.089 Test: verify: DIF generated, GUARD check ...passed 00:08:23.089 Test: verify: DIF generated, APPTAG check ...passed 00:08:23.089 Test: verify: DIF generated, REFTAG check ...passed 00:08:23.089 Test: verify: DIF not generated, GUARD check ...[2024-06-07 21:25:23.317195] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:23.089 passed 00:08:23.089 Test: verify: DIF not generated, APPTAG check ...[2024-06-07 21:25:23.317259] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:23.089 passed 00:08:23.089 Test: verify: DIF not generated, REFTAG check ...[2024-06-07 21:25:23.317296] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:23.089 passed 00:08:23.089 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:23.089 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-07 21:25:23.317358] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=30, Expected=28, Actual=14 00:08:23.089 passed 00:08:23.089 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:23.089 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:23.089 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:23.089 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-07 21:25:23.317498] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:23.089 passed 00:08:23.089 Test: verify copy: DIF generated, GUARD check ...passed 00:08:23.089 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:23.089 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:23.089 Test: verify copy: DIF not generated, GUARD check ...[2024-06-07 21:25:23.317650] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:23.089 passed 00:08:23.089 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-07 21:25:23.317681] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:23.089 passed 00:08:23.089 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-07 21:25:23.317708] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:23.089 passed 00:08:23.089 Test: generate copy: DIF generated, GUARD check ...passed 00:08:23.089 Test: generate copy: DIF generated, APPTAG check ...passed 00:08:23.089 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:23.089 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:23.089 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:23.089 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:23.089 Test: generate copy: iovecs-len validate ...[2024-06-07 21:25:23.317935] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:23.089 passed 00:08:23.089 Test: generate copy: buffer alignment validate ...passed 00:08:23.089 00:08:23.089 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.089 suites 1 1 n/a 0 0 00:08:23.089 tests 26 26 26 0 0 00:08:23.089 asserts 115 115 115 0 n/a 00:08:23.089 00:08:23.089 Elapsed time = 0.002 seconds 00:08:23.348 00:08:23.348 real 0m0.486s 00:08:23.348 user 0m0.702s 00:08:23.348 sys 0m0.168s 00:08:23.348 21:25:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:23.348 21:25:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:23.348 ************************************ 00:08:23.348 END TEST accel_dif_functional_tests 00:08:23.348 ************************************ 00:08:23.348 00:08:23.348 real 0m32.538s 00:08:23.348 user 0m35.822s 00:08:23.348 sys 0m4.932s 00:08:23.348 21:25:23 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:23.348 21:25:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.348 ************************************ 00:08:23.348 END TEST accel 00:08:23.348 ************************************ 00:08:23.348 21:25:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:23.348 21:25:23 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:23.348 21:25:23 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:23.348 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:08:23.348 ************************************ 00:08:23.348 START TEST accel_rpc 00:08:23.348 ************************************ 00:08:23.348 21:25:23 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:23.607 * Looking for test storage... 00:08:23.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:23.607 21:25:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:23.607 21:25:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1265909 00:08:23.607 21:25:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1265909 00:08:23.607 21:25:23 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 1265909 ']' 00:08:23.607 21:25:23 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.607 21:25:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:23.607 21:25:23 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:23.607 21:25:23 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.607 21:25:23 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:23.607 21:25:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.607 [2024-06-07 21:25:23.754685] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
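The accel_rpc suite starting here exercises the RPC surface rather than the data path: spdk_tgt is launched with --wait-for-rpc so that opcode-to-module assignments can be made before the framework initializes. Roughly, assuming rpc.py talks to the default /var/tmp/spdk.sock socket and reusing the $SPDK_DIR variable from the sketches above:

"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &                     # start target, defer framework init
"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m incorrect    # bogus module name is accepted pre-init
"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software     # then pin the copy opcode to software
"$SPDK_DIR/scripts/rpc.py" framework_start_init                     # finish initialization
"$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # the trace greps this for "software"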
00:08:23.607 [2024-06-07 21:25:23.754751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265909 ] 00:08:23.607 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.607 [2024-06-07 21:25:23.845831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.865 [2024-06-07 21:25:23.936893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.434 21:25:24 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:24.434 21:25:24 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:24.434 21:25:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:24.434 21:25:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:24.434 21:25:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:24.434 21:25:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:24.434 21:25:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:24.434 21:25:24 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:24.434 21:25:24 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:24.434 21:25:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.434 ************************************ 00:08:24.434 START TEST accel_assign_opcode 00:08:24.434 ************************************ 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.434 [2024-06-07 21:25:24.623044] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.434 [2024-06-07 21:25:24.631054] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.434 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.693 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.694 21:25:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:24.694 21:25:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:24.694 21:25:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:24.694 
21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.694 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.694 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.694 software 00:08:24.694 00:08:24.694 real 0m0.257s 00:08:24.694 user 0m0.048s 00:08:24.694 sys 0m0.009s 00:08:24.694 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:24.694 21:25:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:24.694 ************************************ 00:08:24.694 END TEST accel_assign_opcode 00:08:24.694 ************************************ 00:08:24.694 21:25:24 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1265909 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 1265909 ']' 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 1265909 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1265909 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1265909' 00:08:24.694 killing process with pid 1265909 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@968 -- # kill 1265909 00:08:24.694 21:25:24 accel_rpc -- common/autotest_common.sh@973 -- # wait 1265909 00:08:25.262 00:08:25.262 real 0m1.687s 00:08:25.262 user 0m1.752s 00:08:25.262 sys 0m0.477s 00:08:25.262 21:25:25 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:25.262 21:25:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.262 ************************************ 00:08:25.262 END TEST accel_rpc 00:08:25.262 ************************************ 00:08:25.262 21:25:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:25.262 21:25:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:25.262 21:25:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:25.262 21:25:25 -- common/autotest_common.sh@10 -- # set +x 00:08:25.262 ************************************ 00:08:25.262 START TEST app_cmdline 00:08:25.262 ************************************ 00:08:25.262 21:25:25 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:25.262 * Looking for test storage... 
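The accel_assign_opcode suite that just finished deliberately assigns the copy opcode to a nonexistent module first, then to the real software module, and only verifies the outcome after framework_start_init resolves the recorded assignments. A minimal manual replay of that flow -- the --wait-for-rpc flag and the relative paths are assumptions, not copied from this trace:

    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # recorded, not validated yet
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # the later assignment wins
    ./scripts/rpc.py framework_start_init                     # subsystem init applies the assignments
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # the test greps this for "software"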
00:08:25.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:25.262 21:25:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:25.262 21:25:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1266252 00:08:25.262 21:25:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1266252 00:08:25.262 21:25:25 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 1266252 ']' 00:08:25.262 21:25:25 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.262 21:25:25 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:25.262 21:25:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:25.262 21:25:25 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.262 21:25:25 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:25.262 21:25:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.262 [2024-06-07 21:25:25.513513] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:25.262 [2024-06-07 21:25:25.513578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266252 ] 00:08:25.521 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.521 [2024-06-07 21:25:25.602144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.521 [2024-06-07 21:25:25.691967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:26.458 { 00:08:26.458 "version": "SPDK v24.09-pre git sha1 422f7ef4e", 00:08:26.458 "fields": { 00:08:26.458 "major": 24, 00:08:26.458 "minor": 9, 00:08:26.458 "patch": 0, 00:08:26.458 "suffix": "-pre", 00:08:26.458 "commit": "422f7ef4e" 00:08:26.458 } 00:08:26.458 } 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:26.458 21:25:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:26.458 21:25:26 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:26.717 request: 00:08:26.717 { 00:08:26.717 "method": "env_dpdk_get_mem_stats", 00:08:26.717 "req_id": 1 00:08:26.717 } 00:08:26.717 Got JSON-RPC error response 00:08:26.717 response: 00:08:26.717 { 00:08:26.717 "code": -32601, 00:08:26.717 "message": "Method not found" 00:08:26.717 } 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:26.717 21:25:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1266252 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 1266252 ']' 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 1266252 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1266252 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1266252' 00:08:26.717 killing process with pid 1266252 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@968 -- # kill 1266252 00:08:26.717 21:25:26 app_cmdline -- common/autotest_common.sh@973 -- # wait 1266252 00:08:27.284 00:08:27.284 real 0m1.908s 00:08:27.284 user 0m2.381s 00:08:27.284 sys 0m0.475s 00:08:27.284 21:25:27 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 
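The app_cmdline suite closing out here exercises the RPC allow-list: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else fails with JSON-RPC error -32601, which is what the env_dpdk_get_mem_stats probe above demonstrates. A condensed sketch of the same check, assuming the default /var/tmp/spdk.sock:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version         # allowed: returns the version object shown above
    ./scripts/rpc.py rpc_get_methods          # allowed: lists only the two permitted names
    ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected: code -32601, "Method not found"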
00:08:27.284 21:25:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:27.284 ************************************ 00:08:27.284 END TEST app_cmdline 00:08:27.284 ************************************ 00:08:27.284 21:25:27 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:27.284 21:25:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:27.284 21:25:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:27.284 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:08:27.284 ************************************ 00:08:27.284 START TEST version 00:08:27.284 ************************************ 00:08:27.284 21:25:27 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:27.284 * Looking for test storage... 00:08:27.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:27.284 21:25:27 version -- app/version.sh@17 -- # get_header_version major 00:08:27.284 21:25:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:27.284 21:25:27 version -- app/version.sh@14 -- # cut -f2 00:08:27.284 21:25:27 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.284 21:25:27 version -- app/version.sh@17 -- # major=24 00:08:27.284 21:25:27 version -- app/version.sh@18 -- # get_header_version minor 00:08:27.284 21:25:27 version -- app/version.sh@14 -- # cut -f2 00:08:27.284 21:25:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:27.284 21:25:27 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.284 21:25:27 version -- app/version.sh@18 -- # minor=9 00:08:27.285 21:25:27 version -- app/version.sh@19 -- # get_header_version patch 00:08:27.285 21:25:27 version -- app/version.sh@14 -- # cut -f2 00:08:27.285 21:25:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:27.285 21:25:27 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.285 21:25:27 version -- app/version.sh@19 -- # patch=0 00:08:27.285 21:25:27 version -- app/version.sh@20 -- # get_header_version suffix 00:08:27.285 21:25:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:27.285 21:25:27 version -- app/version.sh@14 -- # cut -f2 00:08:27.285 21:25:27 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.285 21:25:27 version -- app/version.sh@20 -- # suffix=-pre 00:08:27.285 21:25:27 version -- app/version.sh@22 -- # version=24.9 00:08:27.285 21:25:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:27.285 21:25:27 version -- app/version.sh@28 -- # version=24.9rc0 00:08:27.285 21:25:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:27.285 21:25:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:27.285 21:25:27 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:08:27.285 21:25:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:27.285 00:08:27.285 real 0m0.164s 00:08:27.285 user 0m0.081s 00:08:27.285 sys 0m0.118s 00:08:27.285 21:25:27 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:27.285 21:25:27 version -- common/autotest_common.sh@10 -- # set +x 00:08:27.285 ************************************ 00:08:27.285 END TEST version 00:08:27.285 ************************************ 00:08:27.285 21:25:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:27.285 21:25:27 -- spdk/autotest.sh@198 -- # uname -s 00:08:27.285 21:25:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:27.285 21:25:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:27.285 21:25:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:27.285 21:25:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:27.285 21:25:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:27.285 21:25:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:27.285 21:25:27 -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:27.285 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:08:27.543 21:25:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:27.543 21:25:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:27.543 21:25:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:27.543 21:25:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:27.543 21:25:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:27.543 21:25:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:27.543 21:25:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:27.543 21:25:27 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:27.543 21:25:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:27.543 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:08:27.543 ************************************ 00:08:27.543 START TEST nvmf_tcp 00:08:27.543 ************************************ 00:08:27.543 21:25:27 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:27.543 * Looking for test storage... 00:08:27.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.543 21:25:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.544 21:25:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.544 21:25:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.544 21:25:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.544 21:25:27 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.544 21:25:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.544 21:25:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.544 21:25:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:27.544 21:25:27 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:27.544 21:25:27 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:27.544 21:25:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:27.544 21:25:27 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:27.544 21:25:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:27.544 21:25:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:27.544 21:25:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.544 ************************************ 00:08:27.544 START TEST nvmf_example 00:08:27.544 ************************************ 00:08:27.544 21:25:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:27.803 * Looking for test storage... 
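nvmf/common.sh, sourced at the top of nvmf.sh and again by each target test, pins the fabric defaults visible throughout this trace. A short summary of the relevant knobs, with the host-ID derivation written out as a sketch (the exact parameter expansion is an assumption; the values match this run):

    NVMF_PORT=4420                    # first listener port; 4421/4422 serve as second/third
    NVMF_TCP_IP_ADDRESS=127.0.0.1     # loopback default; this phy run uses 10.0.0.1/10.0.0.2 instead
    NVME_HOSTNQN=$(nvme gen-hostnqn)  # nvme-cli generates a random uuid-based host NQN
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # the uuid portion, 00abaa28-3537-eb11-906e-0017a4403562 here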
00:08:27.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.803 21:25:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.804 21:25:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:34.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:34.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.374 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:34.375 Found net devices under 
0000:af:00.0: cvl_0_0 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:34.375 Found net devices under 0000:af:00.1: cvl_0_1 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:08:34.375 00:08:34.375 --- 10.0.0.2 ping statistics --- 00:08:34.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.375 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:08:34.375 00:08:34.375 --- 10.0.0.1 ping statistics --- 00:08:34.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.375 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1270364 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1270364 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 1270364 ']' 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
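The network bring-up traced above is easier to follow untangled: the two e810 ports live in one host, so the target port (cvl_0_0) is moved into a private network namespace while the initiator port (cvl_0_1) stays in the root namespace, and reachability is proven with a ping in each direction. Every command below appears verbatim in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                            # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target namespace -> initiator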
00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:34.375 21:25:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:34.375 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:35.312 21:25:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:35.571 EAL: No free 2048 kB hugepages reported on node 1 
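Condensed, the RPC sequence above stands up the entire target that the perf run below measures. Replayed by hand it would look like the following; that rpc.py reaches the default /var/tmp/spdk.sock from the root namespace (UNIX sockets ignore network namespaces) is an assumption, not something the trace states:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # transport flags as traced
    ./scripts/rpc.py bdev_malloc_create 64 512                    # 64 MiB ramdisk, 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'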
00:08:45.553 Initializing NVMe Controllers 00:08:45.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:45.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:45.553 Initialization complete. Launching workers. 00:08:45.553 ======================================================== 00:08:45.553 Latency(us) 00:08:45.553 Device Information : IOPS MiB/s Average min max 00:08:45.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15274.00 59.66 4192.34 1006.45 17340.99 00:08:45.553 ======================================================== 00:08:45.553 Total : 15274.00 59.66 4192.34 1006.45 17340.99 00:08:45.553 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.553 rmmod nvme_tcp 00:08:45.553 rmmod nvme_fabrics 00:08:45.553 rmmod nvme_keyring 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1270364 ']' 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1270364 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 1270364 ']' 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 1270364 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:45.553 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1270364 00:08:45.813 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:08:45.813 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:08:45.813 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1270364' 00:08:45.813 killing process with pid 1270364 00:08:45.813 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 1270364 00:08:45.813 21:25:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 1270364 00:08:45.813 nvmf threads initialize successfully 00:08:45.813 bdev subsystem init successfully 00:08:45.813 created a nvmf target service 00:08:45.813 create targets's poll groups done 00:08:45.813 all subsystems of target started 00:08:45.813 nvmf target is running 00:08:45.813 all subsystems of target stopped 00:08:45.813 destroy targets's poll groups done 00:08:45.813 destroyed the nvmf target service 00:08:45.813 bdev subsystem finish successfully 00:08:45.813 nvmf threads destroy successfully 00:08:45.813 21:25:46 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.813 21:25:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.813 21:25:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.813 21:25:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.813 21:25:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.813 21:25:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.813 21:25:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.813 21:25:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.350 21:25:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.350 21:25:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:48.350 21:25:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:48.350 21:25:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.350 00:08:48.350 real 0m20.374s 00:08:48.350 user 0m46.446s 00:08:48.350 sys 0m6.374s 00:08:48.350 21:25:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:48.350 21:25:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.350 ************************************ 00:08:48.350 END TEST nvmf_example 00:08:48.350 ************************************ 00:08:48.350 21:25:48 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:48.350 21:25:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:48.350 21:25:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:48.350 21:25:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.350 ************************************ 00:08:48.350 START TEST nvmf_filesystem 00:08:48.350 ************************************ 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:48.350 * Looking for test storage... 
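A quick consistency check on the nvmf_example numbers above: 15274.00 IOPS at the run's 4096-byte IO size works out to 15274 x 4096 / 2^20 = 59.66 MiB/s, matching the MiB/s column, and at the queue depth of 64 the run used, Little's law (outstanding IOs = IOPS x average latency) predicts 64 / 15274 = 4.19 ms per IO, agreeing with the reported 4192.34 us average.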
00:08:48.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:48.350 21:25:48 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:48.350 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:48.351 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:48.351 #define SPDK_CONFIG_H 00:08:48.351 #define SPDK_CONFIG_APPS 1 00:08:48.351 #define SPDK_CONFIG_ARCH native 00:08:48.351 #undef SPDK_CONFIG_ASAN 00:08:48.351 #undef SPDK_CONFIG_AVAHI 00:08:48.351 #undef SPDK_CONFIG_CET 00:08:48.351 #define SPDK_CONFIG_COVERAGE 1 00:08:48.351 #define SPDK_CONFIG_CROSS_PREFIX 00:08:48.351 #undef SPDK_CONFIG_CRYPTO 00:08:48.351 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:48.351 #undef SPDK_CONFIG_CUSTOMOCF 00:08:48.351 #undef SPDK_CONFIG_DAOS 00:08:48.351 #define SPDK_CONFIG_DAOS_DIR 00:08:48.351 #define SPDK_CONFIG_DEBUG 1 00:08:48.351 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:48.351 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:48.351 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:48.351 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:48.351 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:48.351 #undef SPDK_CONFIG_DPDK_UADK 00:08:48.351 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:48.351 #define SPDK_CONFIG_EXAMPLES 1 00:08:48.351 #undef SPDK_CONFIG_FC 00:08:48.351 #define SPDK_CONFIG_FC_PATH 00:08:48.351 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:48.351 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:48.351 #undef SPDK_CONFIG_FUSE 00:08:48.351 #undef SPDK_CONFIG_FUZZER 00:08:48.351 #define SPDK_CONFIG_FUZZER_LIB 00:08:48.351 #undef SPDK_CONFIG_GOLANG 00:08:48.351 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:48.351 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:48.351 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:48.351 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:48.351 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:48.351 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:48.351 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:48.351 #define SPDK_CONFIG_IDXD 1 00:08:48.351 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:48.351 #undef SPDK_CONFIG_IPSEC_MB 00:08:48.351 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:48.351 #define SPDK_CONFIG_ISAL 1 00:08:48.351 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:48.351 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:48.351 #define SPDK_CONFIG_LIBDIR 00:08:48.351 #undef SPDK_CONFIG_LTO 00:08:48.351 #define SPDK_CONFIG_MAX_LCORES 00:08:48.351 #define SPDK_CONFIG_NVME_CUSE 1 00:08:48.351 #undef SPDK_CONFIG_OCF 00:08:48.351 #define SPDK_CONFIG_OCF_PATH 00:08:48.351 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:48.351 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:48.351 #define SPDK_CONFIG_PGO_DIR 00:08:48.351 #undef SPDK_CONFIG_PGO_USE 00:08:48.351 #define SPDK_CONFIG_PREFIX /usr/local 00:08:48.351 #undef SPDK_CONFIG_RAID5F 00:08:48.351 #undef SPDK_CONFIG_RBD 00:08:48.351 #define SPDK_CONFIG_RDMA 1 00:08:48.351 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:48.351 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:48.351 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:48.351 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:48.351 #define SPDK_CONFIG_SHARED 1 00:08:48.351 #undef SPDK_CONFIG_SMA 00:08:48.351 #define SPDK_CONFIG_TESTS 1 00:08:48.351 #undef SPDK_CONFIG_TSAN 00:08:48.351 #define SPDK_CONFIG_UBLK 1 00:08:48.351 #define SPDK_CONFIG_UBSAN 1 00:08:48.351 #undef SPDK_CONFIG_UNIT_TESTS 00:08:48.351 #undef SPDK_CONFIG_URING 00:08:48.351 #define SPDK_CONFIG_URING_PATH 00:08:48.351 #undef SPDK_CONFIG_URING_ZNS 00:08:48.351 #undef SPDK_CONFIG_USDT 00:08:48.351 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:48.351 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:48.351 #define SPDK_CONFIG_VFIO_USER 1 00:08:48.351 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:48.351 #define SPDK_CONFIG_VHOST 1 00:08:48.351 #define SPDK_CONFIG_VIRTIO 1 00:08:48.351 #undef SPDK_CONFIG_VTUNE 00:08:48.352 #define SPDK_CONFIG_VTUNE_DIR 00:08:48.352 #define SPDK_CONFIG_WERROR 1 00:08:48.352 #define SPDK_CONFIG_WPDK_DIR 00:08:48.352 #undef SPDK_CONFIG_XNVME 00:08:48.352 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:48.352 21:25:48 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:48.353 21:25:48 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:48.353 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
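The records that follow are set_test_storage: the harness bumps the requested 2147483648 bytes to 2214592512 (a 64 MiB margin), snapshots every mount with df -T, then checks whether the filesystem backing the test directory has that much headroom before exporting it as SPDK_TEST_STORAGE. A simplified reconstruction of that df parsing, using the same field order and variable names visible in the trace (the overlay/tmpfs special cases and the mktemp fallback are abbreviated):

  # sketch: simplified set_test_storage, mirroring the trace below
  requested_size=2147483648              # grows to 2214592512 in the trace
  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$size              # units as reported by df
      avails["$mount"]=$avail
  done < <(df -T | grep -v Filesystem)
  # find the mount point backing the test directory and check headroom
  mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
  target_space=${avails[$mount]}
  if ((target_space >= requested_size)); then
      export SPDK_TEST_STORAGE=$testdir
  fi
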
00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1273105 ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1273105 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.LjQmwS 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.LjQmwS/tests/target /tmp/spdk.LjQmwS 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956715008 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327714816 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=83714809856 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=94501429248 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10786619392 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47195078656 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250714624 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=18890649600 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=18900287488 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9637888 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47249625088 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250714624 00:08:48.354 21:25:48 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1089536 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9450135552 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450139648 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:48.354 * Looking for test storage... 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=83714809856 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13001211904 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:08:48.354 21:25:48 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:48.354 21:25:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.355 
21:25:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.355 21:25:48 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.355 21:25:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.922 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:54.923 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:54.923 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.923 21:25:54 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:54.923 Found net devices under 0000:af:00.0: cvl_0_0 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:54.923 Found net devices under 0000:af:00.1: cvl_0_1 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:08:54.923 00:08:54.923 --- 10.0.0.2 ping statistics --- 00:08:54.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.923 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:54.923 00:08:54.923 --- 10.0.0.1 ping statistics --- 00:08:54.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.923 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.923 ************************************ 00:08:54.923 START TEST nvmf_filesystem_no_in_capsule 00:08:54.923 ************************************ 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1276618 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1276618 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1276618 ']' 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:54.923 21:25:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:54.923 [2024-06-07 21:25:55.011851] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:08:54.923 [2024-06-07 21:25:55.011903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.923 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.923 [2024-06-07 21:25:55.106088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.183 [2024-06-07 21:25:55.200314] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.183 [2024-06-07 21:25:55.200356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.183 [2024-06-07 21:25:55.200367] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.183 [2024-06-07 21:25:55.200375] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.183 [2024-06-07 21:25:55.200383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
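The xtrace records above (nvmf/common.sh@229 through @268, then @480) show the whole NVMe/TCP test-bed bring-up: the first e810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, reachability is pinged in both directions, and nvmf_tgt is started inside the namespace while waitforlisten polls /var/tmp/spdk.sock. A condensed sketch of the same sequence; every command is taken from the trace, only the comments and the shortened binary path are added:

    # isolate the target port in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address both ends of the link and bring them up
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # admit NVMe/TCP traffic and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # start the target in the namespace: -i 0 shm id, -e 0xFFFF tracepoint
    # group mask, -m 0xF four reactor cores
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &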
00:08:55.183 [2024-06-07 21:25:55.200436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.183 [2024-06-07 21:25:55.200454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.183 [2024-06-07 21:25:55.200574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.183 [2024-06-07 21:25:55.200575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.750 21:25:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.750 [2024-06-07 21:25:56.001847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.750 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.750 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:55.750 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.750 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.009 Malloc1 00:08:56.009 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.009 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:56.009 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.009 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.009 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.009 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.009 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.010 [2024-06-07 21:25:56.157555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:08:56.010 { 00:08:56.010 "name": "Malloc1", 00:08:56.010 "aliases": [ 00:08:56.010 "a776c096-f5a0-41fe-b469-53ea0e03c851" 00:08:56.010 ], 00:08:56.010 "product_name": "Malloc disk", 00:08:56.010 "block_size": 512, 00:08:56.010 "num_blocks": 1048576, 00:08:56.010 "uuid": "a776c096-f5a0-41fe-b469-53ea0e03c851", 00:08:56.010 "assigned_rate_limits": { 00:08:56.010 "rw_ios_per_sec": 0, 00:08:56.010 "rw_mbytes_per_sec": 0, 00:08:56.010 "r_mbytes_per_sec": 0, 00:08:56.010 "w_mbytes_per_sec": 0 00:08:56.010 }, 00:08:56.010 "claimed": true, 00:08:56.010 "claim_type": "exclusive_write", 00:08:56.010 "zoned": false, 00:08:56.010 "supported_io_types": { 00:08:56.010 "read": true, 00:08:56.010 "write": true, 00:08:56.010 "unmap": true, 00:08:56.010 "write_zeroes": true, 00:08:56.010 "flush": true, 00:08:56.010 "reset": true, 00:08:56.010 "compare": false, 00:08:56.010 "compare_and_write": false, 00:08:56.010 "abort": true, 00:08:56.010 "nvme_admin": false, 00:08:56.010 "nvme_io": false 00:08:56.010 }, 00:08:56.010 "memory_domains": [ 00:08:56.010 { 00:08:56.010 "dma_device_id": "system", 00:08:56.010 "dma_device_type": 1 00:08:56.010 }, 00:08:56.010 { 00:08:56.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.010 "dma_device_type": 2 00:08:56.010 } 00:08:56.010 ], 00:08:56.010 "driver_specific": {} 00:08:56.010 } 00:08:56.010 ]' 00:08:56.010 
21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:08:56.010 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:08:56.269 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:08:56.269 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:08:56.269 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:08:56.269 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:56.269 21:25:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.647 21:25:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:57.647 21:25:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:08:57.647 21:25:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.647 21:25:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:57.647 21:25:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:59.552 21:25:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:59.552 21:25:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:00.119 21:26:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:00.379 21:26:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:01.755 ************************************ 00:09:01.755 START TEST filesystem_ext4 00:09:01.755 ************************************ 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:09:01.755 21:26:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:01.755 mke2fs 1.46.5 (30-Dec-2021) 00:09:01.755 Discarding device blocks: 0/522240 done 00:09:01.755 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:01.755 
Filesystem UUID: 47ba1001-371f-4b0c-aef1-db7dcedf2b37 00:09:01.755 Superblock backups stored on blocks: 00:09:01.755 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:01.755 00:09:01.755 Allocating group tables: 0/64 done 00:09:01.755 Writing inode tables: 0/64 done 00:09:02.322 Creating journal (8192 blocks): done 00:09:03.510 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:09:03.510 00:09:03.510 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:09:03.510 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:03.510 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1276618 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:03.768 00:09:03.768 real 0m2.235s 00:09:03.768 user 0m0.027s 00:09:03.768 sys 0m0.064s 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:03.768 ************************************ 00:09:03.768 END TEST filesystem_ext4 00:09:03.768 ************************************ 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.768 ************************************ 00:09:03.768 START TEST filesystem_btrfs 00:09:03.768 ************************************ 00:09:03.768 21:26:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:03.768 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:03.769 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:03.769 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:09:03.769 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:03.769 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:09:03.769 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:09:03.769 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:09:03.769 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:09:03.769 21:26:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:04.027 btrfs-progs v6.6.2 00:09:04.027 See https://btrfs.readthedocs.io for more information. 00:09:04.027 00:09:04.027 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:04.027 NOTE: several default settings have changed in version 5.15, please make sure 00:09:04.027 this does not affect your deployments: 00:09:04.027 - DUP for metadata (-m dup) 00:09:04.027 - enabled no-holes (-O no-holes) 00:09:04.027 - enabled free-space-tree (-R free-space-tree) 00:09:04.027 00:09:04.027 Label: (null) 00:09:04.027 UUID: c1267a5b-ab4a-4b06-8a4f-8530205d3c0d 00:09:04.027 Node size: 16384 00:09:04.027 Sector size: 4096 00:09:04.027 Filesystem size: 510.00MiB 00:09:04.027 Block group profiles: 00:09:04.027 Data: single 8.00MiB 00:09:04.027 Metadata: DUP 32.00MiB 00:09:04.027 System: DUP 8.00MiB 00:09:04.027 SSD detected: yes 00:09:04.027 Zoned device: no 00:09:04.027 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:04.027 Runtime features: free-space-tree 00:09:04.028 Checksum: crc32c 00:09:04.028 Number of devices: 1 00:09:04.028 Devices: 00:09:04.028 ID SIZE PATH 00:09:04.028 1 510.00MiB /dev/nvme0n1p1 00:09:04.028 00:09:04.028 21:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:09:04.028 21:26:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1276618 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:04.963 00:09:04.963 real 0m1.216s 00:09:04.963 user 0m0.030s 00:09:04.963 sys 0m0.125s 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:04.963 ************************************ 00:09:04.963 END TEST filesystem_btrfs 00:09:04.963 ************************************ 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:04.963 21:26:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:04.963 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.963 ************************************ 00:09:04.963 START TEST filesystem_xfs 00:09:04.963 ************************************ 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:09:04.964 21:26:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:05.223 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:05.223 = sectsz=512 attr=2, projid32bit=1 00:09:05.223 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:05.223 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:05.223 data = bsize=4096 blocks=130560, imaxpct=25 00:09:05.223 = sunit=0 swidth=0 blks 00:09:05.223 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:05.223 log =internal log bsize=4096 blocks=16384, version=2 00:09:05.223 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:05.223 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:06.159 Discarding blocks...Done. 
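All three mkfs invocations in this suite go through the make_filesystem helper traced at common/autotest_common.sh@925 through @944: it picks -F for ext4 (what mke2fs needs to overwrite an existing signature) and -f for btrfs and xfs, then calls mkfs.<fstype> on the partition. The helper also declares a counter ($i at @927) that never fires in this run, so the retry loop below is a plausible reconstruction rather than verbatim source:

    make_filesystem() {
        local fstype=$1       # @925
        local dev_name=$2     # @926
        local i=0             # @927
        local force           # @928
        if [ "$fstype" = ext4 ]; then    # @930
            force=-F                     # @931
        else
            force=-f                     # @933
        fi
        # retry loop assumed from $i; every mkfs in this log succeeded first try
        until mkfs."$fstype" $force "$dev_name"; do    # @936
            [ $((++i)) -le 3 ] || return 1
            sleep 1
        done
        return 0                         # @944
    }

Invoked for the run above as make_filesystem xfs /dev/nvme0n1p1, via nvmf_filesystem_create.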
00:09:06.159 21:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:09:06.159 21:26:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1276618 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:08.693 00:09:08.693 real 0m3.467s 00:09:08.693 user 0m0.031s 00:09:08.693 sys 0m0.065s 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:08.693 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:08.693 ************************************ 00:09:08.693 END TEST filesystem_xfs 00:09:08.693 ************************************ 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:08.694 
21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1276618 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1276618 ']' 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1276618 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:08.694 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1276618 00:09:08.953 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:08.953 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:08.953 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1276618' 00:09:08.953 killing process with pid 1276618 00:09:08.953 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 1276618 00:09:08.953 21:26:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 1276618 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:09.213 00:09:09.213 real 0m14.385s 00:09:09.213 user 0m56.427s 00:09:09.213 sys 0m1.361s 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.213 ************************************ 00:09:09.213 END TEST nvmf_filesystem_no_in_capsule 00:09:09.213 ************************************ 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.213 
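Each filesystem that reaches this point has passed the same smoke test (target/filesystem.sh@23 through @43): mount, create and remove a file with a sync on each side, unmount, confirm the target is still alive with kill -0, and confirm via lsblk that the exported namespace and its partition are still visible. The suite then tears the target down in the order just traced. A sketch of both halves; $nvmfpid is the PID printed by nvmfappstart (1276618 in this pass), and rpc_cmd resolves to scripts/rpc.py run inside the target namespace in this job:

    # per-filesystem smoke test
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process must survive the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # device still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1

    # suite teardown
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"        # killprocess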
************************************ 00:09:09.213 START TEST nvmf_filesystem_in_capsule 00:09:09.213 ************************************ 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1279410 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1279410 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1279410 ']' 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:09.213 21:26:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.213 [2024-06-07 21:26:09.471291] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:09:09.213 [2024-06-07 21:26:09.471345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.472 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.473 [2024-06-07 21:26:09.565614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.473 [2024-06-07 21:26:09.657680] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.473 [2024-06-07 21:26:09.657726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.473 [2024-06-07 21:26:09.657736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.473 [2024-06-07 21:26:09.657745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
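This second pass exercises the identical suite through the same entry point: target/filesystem.sh@105 ran nvmf_filesystem_part with an in-capsule data size of 0 above, and @106 now runs it with 4096, so small writes can be carried inside the NVMe/TCP command capsule instead of being fetched separately. A skeleton of the parameterization as the traced line numbers suggest (a condensed reconstruction, not verbatim source):

    nvmf_filesystem_part() {
        local in_capsule=$1                                               # @47
        nvmfappstart -m 0xF                                               # @49
        rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c "$in_capsule"  # @52
        # ... Malloc1 bdev, subsystem, namespace, listener, nvme connect ...
        if [ "$in_capsule" -eq 0 ]; then                                  # @76
            run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1      # @77
            run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1    # @78
            run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1        # @79
        else
            run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1    # @81
            run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1  # @82
            run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
        fi
    }

    run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0        # first pass
    run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096        # this pass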
00:09:09.473 [2024-06-07 21:26:09.657753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.473 [2024-06-07 21:26:09.657810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.473 [2024-06-07 21:26:09.657911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.473 [2024-06-07 21:26:09.658053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.473 [2024-06-07 21:26:09.658054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 [2024-06-07 21:26:10.456811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 Malloc1 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.410 21:26:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 [2024-06-07 21:26:10.615345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:10.410 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:09:10.410 { 00:09:10.411 "name": "Malloc1", 00:09:10.411 "aliases": [ 00:09:10.411 "31655089-cf91-4e7a-b6b5-aadd0edb87d9" 00:09:10.411 ], 00:09:10.411 "product_name": "Malloc disk", 00:09:10.411 "block_size": 512, 00:09:10.411 "num_blocks": 1048576, 00:09:10.411 "uuid": "31655089-cf91-4e7a-b6b5-aadd0edb87d9", 00:09:10.411 "assigned_rate_limits": { 00:09:10.411 "rw_ios_per_sec": 0, 00:09:10.411 "rw_mbytes_per_sec": 0, 00:09:10.411 "r_mbytes_per_sec": 0, 00:09:10.411 "w_mbytes_per_sec": 0 00:09:10.411 }, 00:09:10.411 "claimed": true, 00:09:10.411 "claim_type": "exclusive_write", 00:09:10.411 "zoned": false, 00:09:10.411 "supported_io_types": { 00:09:10.411 "read": true, 00:09:10.411 "write": true, 00:09:10.411 "unmap": true, 00:09:10.411 "write_zeroes": true, 00:09:10.411 "flush": true, 00:09:10.411 "reset": true, 00:09:10.411 "compare": false, 00:09:10.411 "compare_and_write": false, 00:09:10.411 "abort": true, 00:09:10.411 "nvme_admin": false, 00:09:10.411 "nvme_io": false 00:09:10.411 }, 00:09:10.411 "memory_domains": [ 00:09:10.411 { 00:09:10.411 "dma_device_id": "system", 00:09:10.411 "dma_device_type": 1 00:09:10.411 }, 00:09:10.411 { 00:09:10.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.411 "dma_device_type": 2 00:09:10.411 } 00:09:10.411 ], 00:09:10.411 "driver_specific": {} 00:09:10.411 } 00:09:10.411 ]' 
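As in the first pass, the suite now checks that the block device the initiator is about to format is exactly the size of the Malloc bdev behind it: get_bdev_size multiplies block_size by num_blocks from bdev_get_bdevs (512 * 1048576 = 536870912 bytes), nvme connect attaches the subsystem, and the serial SPDKISFASTANDAWESOME locates the local device. A sketch of the check; the sysfs arithmetic inside sec_size_to_bytes is an assumption, the log only shows its 536870912 result:

    # target side: exported bdev size from the RPC JSON
    bdev_info=$(rpc_cmd bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576
    malloc_size=$((bs * nb))                       # 536870912

    # initiator side: connect, then find the device by serial number
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
        --hostid=00abaa28-3537-eb11-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

    # /sys/block/<dev>/size counts 512-byte sectors (assumed implementation)
    nvme_size=$(( $(cat /sys/block/"$nvme_name"/size) * 512 ))
    (( nvme_size == malloc_size ))    # formatting proceeds only if these match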
00:09:10.411 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:09:10.669 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:09:10.669 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:09:10.669 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:09:10.669 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:09:10.669 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:09:10.669 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:10.669 21:26:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.046 21:26:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.046 21:26:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:09:12.046 21:26:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.046 21:26:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:12.046 21:26:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- 
# echo 536870912 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:13.958 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:14.217 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:14.475 21:26:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.411 ************************************ 00:09:15.411 START TEST filesystem_in_capsule_ext4 00:09:15.411 ************************************ 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:09:15.411 21:26:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:15.411 mke2fs 1.46.5 (30-Dec-2021) 00:09:15.670 Discarding device blocks: 0/522240 done 00:09:15.670 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:15.670 
Filesystem UUID: e6c67c79-aa35-4743-a1f9-673c1834edf6
00:09:15.670 Superblock backups stored on blocks:
00:09:15.671 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:15.671
00:09:15.671 Allocating group tables: 0/64 done
00:09:15.671 Writing inode tables: 0/64 done
00:09:17.571 Creating journal (8192 blocks): done
00:09:18.087 Writing superblocks and filesystem accounting information: 0/64 done
00:09:18.087
00:09:18.087 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:09:18.087 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1279410 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:18.346 00:09:18.346 real 0m2.962s 00:09:18.346 user 0m0.024s 00:09:18.346 sys 0m0.068s 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:18.346 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:18.346 ************************************ 00:09:18.346 END TEST filesystem_in_capsule_ext4 00:09:18.346 ************************************ 00:09:18.604 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:18.604 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:18.604 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:18.604 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.604 ************************************ 00:09:18.604 START TEST 
filesystem_in_capsule_btrfs 00:09:18.604 ************************************ 00:09:18.604 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:09:18.605 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:18.863 btrfs-progs v6.6.2 00:09:18.863 See https://btrfs.readthedocs.io for more information. 00:09:18.863 00:09:18.863 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
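
The ext4 pass above and the btrfs pass in progress here both go through the same make_filesystem helper, and the xtrace shows the only branch that differs per filesystem: mke2fs spells its force flag -F, while mkfs.btrfs and mkfs.xfs take lowercase -f. A minimal sketch of that selection logic, reconstructed from the traced lines (the helper also declares a retry counter i whose loop body is not visible in this excerpt, so the single-attempt form below is an assumption):

  # Reconstructed from the common/autotest_common.sh xtrace above; the
  # retry loop around mkfs is not shown in this excerpt, so this
  # single-attempt form is an assumption.
  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F    # mke2fs uses uppercase -F to force
      else
          force=-f    # mkfs.btrfs and mkfs.xfs use lowercase -f
      fi
      mkfs.$fstype $force "$dev_name"
  }
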
00:09:18.863 NOTE: several default settings have changed in version 5.15, please make sure
00:09:18.863 this does not affect your deployments:
00:09:18.863 - DUP for metadata (-m dup)
00:09:18.863 - enabled no-holes (-O no-holes)
00:09:18.863 - enabled free-space-tree (-R free-space-tree)
00:09:18.863
00:09:18.863 Label: (null)
00:09:18.863 UUID: 7a6bb9b7-8640-4af6-a4ea-09433fa7826f
00:09:18.863 Node size: 16384
00:09:18.863 Sector size: 4096
00:09:18.863 Filesystem size: 510.00MiB
00:09:18.863 Block group profiles:
00:09:18.863 Data: single 8.00MiB
00:09:18.863 Metadata: DUP 32.00MiB
00:09:18.863 System: DUP 8.00MiB
00:09:18.863 SSD detected: yes
00:09:18.863 Zoned device: no
00:09:18.863 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:09:18.863 Runtime features: free-space-tree
00:09:18.863 Checksum: crc32c
00:09:18.863 Number of devices: 1
00:09:18.863 Devices:
00:09:18.863 ID SIZE PATH
00:09:18.863 1 510.00MiB /dev/nvme0n1p1
00:09:18.863
00:09:18.863 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:09:18.863 21:26:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1279410 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.122 00:09:19.122 real 0m0.592s 00:09:19.122 user 0m0.030s 00:09:19.122 sys 0m0.125s 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:19.122 ************************************ 00:09:19.122 END TEST filesystem_in_capsule_btrfs 00:09:19.122 ************************************ 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.122 ************************************ 00:09:19.122 START TEST filesystem_in_capsule_xfs 00:09:19.122 ************************************ 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:09:19.122 21:26:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1
00:09:19.380 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:09:19.380 = sectsz=512 attr=2, projid32bit=1
00:09:19.380 = crc=1 finobt=1, sparse=1, rmapbt=0
00:09:19.380 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:09:19.380 data = bsize=4096 blocks=130560, imaxpct=25
00:09:19.380 = sunit=0 swidth=0 blks
00:09:19.380 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:09:19.380 log =internal log bsize=4096 blocks=16384, version=2
00:09:19.380 = sectsz=512 sunit=0 blks, lazy-count=1
00:09:19.380 realtime =none extsz=4096 blocks=0, rtextents=0
00:09:19.947 Discarding blocks...Done. 
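
Once a filesystem is created, every variant (ext4 and btrfs above, xfs next) is exercised with the same smoke test from target/filesystem.sh: mount the partition, create and delete a file with syncs in between, unmount, then confirm the target process and the block devices survived. A condensed sketch of that cycle as traced; $nvmfpid here stands in for the PID the script tracks (1279410 in this run):

  # Condensed from the target/filesystem.sh@23-43 trace above.
  mount /dev/nvme0n1p1 /mnt/device          # mount the new filesystem
  touch /mnt/device/aaa                     # prove a file can be created
  sync
  rm /mnt/device/aaa                        # ...and removed again
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target app still alive?
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible
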
00:09:19.947 21:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:09:19.947 21:26:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1279410 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:21.851 00:09:21.851 real 0m2.667s 00:09:21.851 user 0m0.026s 00:09:21.851 sys 0m0.069s 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:21.851 21:26:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:21.851 ************************************ 00:09:21.851 END TEST filesystem_in_capsule_xfs 00:09:21.851 ************************************ 00:09:21.851 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:22.110 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:22.110 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.370 21:26:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1279410 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1279410 ']' 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1279410 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1279410 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1279410' 00:09:22.370 killing process with pid 1279410 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 1279410 00:09:22.370 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 1279410 00:09:22.630 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:22.630 00:09:22.630 real 0m13.448s 00:09:22.630 user 0m52.696s 00:09:22.630 sys 0m1.330s 00:09:22.630 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.630 21:26:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.630 ************************************ 00:09:22.630 END TEST nvmf_filesystem_in_capsule 00:09:22.630 ************************************ 00:09:22.630 21:26:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:22.630 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.630 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:22.630 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.630 21:26:22 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.889 rmmod nvme_tcp 00:09:22.889 rmmod nvme_fabrics 00:09:22.889 rmmod nvme_keyring 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.889 21:26:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.890 21:26:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.793 21:26:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.793 00:09:24.793 real 0m36.814s 00:09:24.793 user 1m50.980s 00:09:24.793 sys 0m7.822s 00:09:24.793 21:26:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:24.793 21:26:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.793 ************************************ 00:09:24.793 END TEST nvmf_filesystem 00:09:24.793 ************************************ 00:09:24.793 21:26:25 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:24.793 21:26:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:24.793 21:26:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:25.053 21:26:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.053 ************************************ 00:09:25.053 START TEST nvmf_target_discovery 00:09:25.053 ************************************ 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:25.053 * Looking for test storage... 
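
The nvmftestfini teardown traced above is what returns the host to a clean state between suites: unload the NVMe-oF host modules while failures are tolerated under set +e, tear down the target network namespace, and flush the initiator-side test address. A condensed sketch under this run's names (_remove_spdk_ns is not expanded in the trace, so the ip netns delete line is an assumption about its effect):

  # Condensed from the nvmf/common.sh teardown trace above.
  set +e
  modprobe -v -r nvme-tcp        # rmmod output shows nvme_tcp,
  modprobe -v -r nvme-fabrics    # nvme_fabrics and nvme_keyring unloading
  set -e
  # _remove_spdk_ns is not expanded in this trace; deleting the netns
  # created during setup is an assumption about what it does.
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null
  ip -4 addr flush cvl_0_1       # drop the initiator-side address
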
00:09:25.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:25.053 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.054 21:26:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.627 21:26:31 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:31.627 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:31.627 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:31.627 Found net devices under 0000:af:00.0: cvl_0_0 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:31.627 Found net devices under 0000:af:00.1: cvl_0_1 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:09:31.627 00:09:31.627 --- 10.0.0.2 ping statistics --- 00:09:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.627 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:09:31.627 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:09:31.627 00:09:31.627 --- 10.0.0.1 ping statistics --- 00:09:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.627 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1286162 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1286162 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 1286162 ']' 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:31.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:31.628 21:26:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:31.628 [2024-06-07 21:26:31.887156] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:09:31.628 [2024-06-07 21:26:31.887217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.887 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.887 [2024-06-07 21:26:31.983785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.887 [2024-06-07 21:26:32.075432] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.887 [2024-06-07 21:26:32.075475] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.887 [2024-06-07 21:26:32.075486] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.887 [2024-06-07 21:26:32.075495] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.887 [2024-06-07 21:26:32.075502] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.887 [2024-06-07 21:26:32.075545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.887 [2024-06-07 21:26:32.075644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.887 [2024-06-07 21:26:32.075758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.887 [2024-06-07 21:26:32.075758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.825 [2024-06-07 21:26:32.878557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
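
With nvmf_tgt (pid 1286162) listening inside the cvl_0_0_ns_spdk namespace, the discovery test provisions the target entirely over RPC: one TCP transport, then four null bdevs, each wrapped in its own subsystem with a namespace and a TCP listener, plus a discovery listener and a referral. The trace below walks this four times via rpc_cmd; a condensed equivalent using scripts/rpc.py directly (the loop and the $rpc shorthand are mine, the commands and arguments are the traced calls):

  # Same RPC sequence as the rpc_cmd trace that follows.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192-byte in-capsule data

  for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512      # name, size (MiB), block size
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
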
00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.825 Null1 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.825 [2024-06-07 21:26:32.926833] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.825 Null2 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.825 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:32.826 21:26:32 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 Null3 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 Null4 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:33 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.826 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:09:33.086 00:09:33.086 Discovery Log Number of Records 6, Generation counter 6 00:09:33.086 =====Discovery Log Entry 0====== 00:09:33.086 trtype: tcp 00:09:33.086 adrfam: ipv4 00:09:33.086 subtype: current discovery subsystem 00:09:33.086 treq: not required 00:09:33.086 portid: 0 00:09:33.086 trsvcid: 4420 00:09:33.086 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:33.086 traddr: 10.0.0.2 00:09:33.086 eflags: explicit discovery connections, duplicate discovery information 00:09:33.086 sectype: none 00:09:33.086 =====Discovery Log Entry 1====== 00:09:33.086 trtype: tcp 00:09:33.086 adrfam: ipv4 00:09:33.086 subtype: nvme subsystem 00:09:33.086 treq: not required 00:09:33.086 portid: 0 00:09:33.086 trsvcid: 4420 00:09:33.086 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:33.086 traddr: 10.0.0.2 00:09:33.086 eflags: none 00:09:33.086 sectype: none 00:09:33.086 =====Discovery Log Entry 2====== 00:09:33.086 trtype: tcp 00:09:33.086 adrfam: ipv4 00:09:33.086 subtype: nvme subsystem 00:09:33.086 treq: not required 00:09:33.086 portid: 0 00:09:33.086 trsvcid: 4420 00:09:33.086 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:33.086 traddr: 10.0.0.2 00:09:33.086 eflags: none 00:09:33.086 sectype: none 00:09:33.086 =====Discovery Log Entry 3====== 00:09:33.086 trtype: tcp 00:09:33.086 adrfam: ipv4 00:09:33.086 subtype: nvme subsystem 00:09:33.086 treq: not required 00:09:33.086 portid: 0 00:09:33.086 trsvcid: 4420 00:09:33.086 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:33.086 traddr: 10.0.0.2 00:09:33.086 eflags: none 00:09:33.086 sectype: none 00:09:33.086 =====Discovery Log Entry 4====== 00:09:33.086 trtype: tcp 00:09:33.086 adrfam: ipv4 00:09:33.086 subtype: nvme subsystem 00:09:33.086 treq: not required 
00:09:33.086 portid: 0 00:09:33.086 trsvcid: 4420 00:09:33.086 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:33.086 traddr: 10.0.0.2 00:09:33.086 eflags: none 00:09:33.086 sectype: none 00:09:33.086 =====Discovery Log Entry 5====== 00:09:33.086 trtype: tcp 00:09:33.086 adrfam: ipv4 00:09:33.086 subtype: discovery subsystem referral 00:09:33.086 treq: not required 00:09:33.086 portid: 0 00:09:33.086 trsvcid: 4430 00:09:33.086 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:33.086 traddr: 10.0.0.2 00:09:33.086 eflags: none 00:09:33.086 sectype: none 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:33.086 Perform nvmf subsystem discovery via RPC 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.086 [ 00:09:33.086 { 00:09:33.086 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:33.086 "subtype": "Discovery", 00:09:33.086 "listen_addresses": [ 00:09:33.086 { 00:09:33.086 "trtype": "TCP", 00:09:33.086 "adrfam": "IPv4", 00:09:33.086 "traddr": "10.0.0.2", 00:09:33.086 "trsvcid": "4420" 00:09:33.086 } 00:09:33.086 ], 00:09:33.086 "allow_any_host": true, 00:09:33.086 "hosts": [] 00:09:33.086 }, 00:09:33.086 { 00:09:33.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.086 "subtype": "NVMe", 00:09:33.086 "listen_addresses": [ 00:09:33.086 { 00:09:33.086 "trtype": "TCP", 00:09:33.086 "adrfam": "IPv4", 00:09:33.086 "traddr": "10.0.0.2", 00:09:33.086 "trsvcid": "4420" 00:09:33.086 } 00:09:33.086 ], 00:09:33.086 "allow_any_host": true, 00:09:33.086 "hosts": [], 00:09:33.086 "serial_number": "SPDK00000000000001", 00:09:33.086 "model_number": "SPDK bdev Controller", 00:09:33.086 "max_namespaces": 32, 00:09:33.086 "min_cntlid": 1, 00:09:33.086 "max_cntlid": 65519, 00:09:33.086 "namespaces": [ 00:09:33.086 { 00:09:33.086 "nsid": 1, 00:09:33.086 "bdev_name": "Null1", 00:09:33.086 "name": "Null1", 00:09:33.086 "nguid": "C6A3F301E78C4C84892078D40F00B1E5", 00:09:33.086 "uuid": "c6a3f301-e78c-4c84-8920-78d40f00b1e5" 00:09:33.086 } 00:09:33.086 ] 00:09:33.086 }, 00:09:33.086 { 00:09:33.086 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:33.086 "subtype": "NVMe", 00:09:33.086 "listen_addresses": [ 00:09:33.086 { 00:09:33.086 "trtype": "TCP", 00:09:33.086 "adrfam": "IPv4", 00:09:33.086 "traddr": "10.0.0.2", 00:09:33.086 "trsvcid": "4420" 00:09:33.086 } 00:09:33.086 ], 00:09:33.086 "allow_any_host": true, 00:09:33.086 "hosts": [], 00:09:33.086 "serial_number": "SPDK00000000000002", 00:09:33.086 "model_number": "SPDK bdev Controller", 00:09:33.086 "max_namespaces": 32, 00:09:33.086 "min_cntlid": 1, 00:09:33.086 "max_cntlid": 65519, 00:09:33.086 "namespaces": [ 00:09:33.086 { 00:09:33.086 "nsid": 1, 00:09:33.086 "bdev_name": "Null2", 00:09:33.086 "name": "Null2", 00:09:33.086 "nguid": "B919C81C0B5A45C981FC1F17DDC537F0", 00:09:33.086 "uuid": "b919c81c-0b5a-45c9-81fc-1f17ddc537f0" 00:09:33.086 } 00:09:33.086 ] 00:09:33.086 }, 00:09:33.086 { 00:09:33.086 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:33.086 "subtype": "NVMe", 00:09:33.086 "listen_addresses": [ 00:09:33.086 { 00:09:33.086 "trtype": "TCP", 00:09:33.086 "adrfam": "IPv4", 00:09:33.086 "traddr": "10.0.0.2", 00:09:33.086 "trsvcid": "4420" 00:09:33.086 } 00:09:33.086 ], 00:09:33.086 "allow_any_host": true, 
00:09:33.086 "hosts": [], 00:09:33.086 "serial_number": "SPDK00000000000003", 00:09:33.086 "model_number": "SPDK bdev Controller", 00:09:33.086 "max_namespaces": 32, 00:09:33.086 "min_cntlid": 1, 00:09:33.086 "max_cntlid": 65519, 00:09:33.086 "namespaces": [ 00:09:33.086 { 00:09:33.086 "nsid": 1, 00:09:33.086 "bdev_name": "Null3", 00:09:33.086 "name": "Null3", 00:09:33.086 "nguid": "D5073F40D6EF418FB9679AD26BE95D1F", 00:09:33.086 "uuid": "d5073f40-d6ef-418f-b967-9ad26be95d1f" 00:09:33.086 } 00:09:33.086 ] 00:09:33.086 }, 00:09:33.086 { 00:09:33.086 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:33.086 "subtype": "NVMe", 00:09:33.086 "listen_addresses": [ 00:09:33.086 { 00:09:33.086 "trtype": "TCP", 00:09:33.086 "adrfam": "IPv4", 00:09:33.086 "traddr": "10.0.0.2", 00:09:33.086 "trsvcid": "4420" 00:09:33.086 } 00:09:33.086 ], 00:09:33.086 "allow_any_host": true, 00:09:33.086 "hosts": [], 00:09:33.086 "serial_number": "SPDK00000000000004", 00:09:33.086 "model_number": "SPDK bdev Controller", 00:09:33.086 "max_namespaces": 32, 00:09:33.086 "min_cntlid": 1, 00:09:33.086 "max_cntlid": 65519, 00:09:33.086 "namespaces": [ 00:09:33.086 { 00:09:33.086 "nsid": 1, 00:09:33.086 "bdev_name": "Null4", 00:09:33.086 "name": "Null4", 00:09:33.086 "nguid": "B00108C8C25E4821817E552603385B52", 00:09:33.086 "uuid": "b00108c8-c25e-4821-817e-552603385b52" 00:09:33.086 } 00:09:33.086 ] 00:09:33.086 } 00:09:33.086 ] 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.086 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.087 rmmod nvme_tcp 00:09:33.087 rmmod nvme_fabrics 00:09:33.087 rmmod nvme_keyring 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1286162 ']' 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1286162 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 1286162 ']' 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 1286162 00:09:33.087 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1286162 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1286162' 00:09:33.346 killing process with pid 1286162 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 1286162 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 1286162 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.346 21:26:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.882 21:26:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.882 00:09:35.882 real 0m10.573s 00:09:35.882 user 0m8.123s 00:09:35.882 sys 0m5.454s 00:09:35.882 21:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:35.882 21:26:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.882 ************************************ 00:09:35.882 END TEST nvmf_target_discovery 00:09:35.882 ************************************ 00:09:35.882 21:26:35 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:35.882 21:26:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:35.882 21:26:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:35.882 21:26:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.882 ************************************ 00:09:35.882 START TEST nvmf_referrals 00:09:35.882 ************************************ 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:35.882 * Looking for test storage... 00:09:35.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.882 21:26:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
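The referral constants just defined (127.0.0.2 through 127.0.0.4 on port 4430) drive the RPC sequence traced below. As a minimal sketch, the same setup can be reproduced by hand against a running nvmf_tgt using the stock scripts/rpc.py client (the rpc_cmd helper in this trace is a thin wrapper around it); the transport options mirror the trace, everything else is illustrative:

    # create the TCP transport and a discovery listener, then register referrals
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # the test then asserts exactly three referrals are reported back
    scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect: 3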
00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.883 21:26:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.534 21:26:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:42.534 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:42.534 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.534 21:26:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:42.534 Found net devices under 0000:af:00.0: cvl_0_0 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.534 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:42.535 Found net devices under 0000:af:00.1: cvl_0_1 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.535 21:26:42 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:42.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:09:42.535 00:09:42.535 --- 10.0.0.2 ping statistics --- 00:09:42.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.535 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:09:42.535 00:09:42.535 --- 10.0.0.1 ping statistics --- 00:09:42.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.535 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1290558 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1290558 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 1290558 ']' 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
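The two pings above (one per direction, ~0.3 ms, 0% loss) validate the network-namespace plumbing nvmftestinit just performed on the cvl_0_* pair. For reference, a sketch of that wiring as it appears in the trace (the device names are this rig's E810 ports; substitute your own interfaces):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator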
00:09:42.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:42.535 21:26:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.535 [2024-06-07 21:26:42.490316] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:09:42.535 [2024-06-07 21:26:42.490374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.535 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.535 [2024-06-07 21:26:42.586622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.535 [2024-06-07 21:26:42.678257] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.535 [2024-06-07 21:26:42.678300] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.535 [2024-06-07 21:26:42.678310] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.535 [2024-06-07 21:26:42.678319] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.535 [2024-06-07 21:26:42.678327] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.535 [2024-06-07 21:26:42.678382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.535 [2024-06-07 21:26:42.678484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.535 [2024-06-07 21:26:42.678599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.535 [2024-06-07 21:26:42.678599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.471 [2024-06-07 21:26:43.483877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.471 [2024-06-07 21:26:43.500074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:43.471 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.472 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:43.730 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.731 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.731 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.731 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.731 21:26:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 
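The get_referral_ips helper used throughout this test verifies the referral list two ways: over RPC (nvmf_discovery_get_referrals piped through jq and sort) and from the host side with nvme-cli, as in the discover invocation just traced. A trimmed sketch of the host-side check (the --hostnqn/--hostid arguments are omitted here; the trace passes this rig's generated UUID for both):

    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort

Comparing this output against the RPC view catches referrals that were registered with the target but never surfaced in the discovery log page.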
00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.990 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:44.249 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.508 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals 
-- target/referrals.sh@26 -- # echo 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.767 21:26:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.767 rmmod nvme_tcp 00:09:44.767 rmmod nvme_fabrics 00:09:45.026 rmmod nvme_keyring 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1290558 ']' 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1290558 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 1290558 ']' 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 1290558 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1290558 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1290558' 00:09:45.027 killing process with pid 1290558 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 1290558 00:09:45.027 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 1290558 00:09:45.286 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.286 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:45.286 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:45.286 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.286 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.286 21:26:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.286 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.286 21:26:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.192 21:26:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.192 00:09:47.192 real 0m11.645s 00:09:47.192 user 0m13.287s 00:09:47.192 sys 0m5.687s 00:09:47.192 21:26:47 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:09:47.192 21:26:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.192 ************************************ 00:09:47.192 END TEST nvmf_referrals 00:09:47.192 ************************************ 00:09:47.192 21:26:47 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:47.192 21:26:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:47.192 21:26:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:47.192 21:26:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.192 ************************************ 00:09:47.192 START TEST nvmf_connect_disconnect 00:09:47.192 ************************************ 00:09:47.192 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:47.451 * Looking for test storage... 00:09:47.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.451 
21:26:47 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.451 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.452 21:26:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:54.018 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:54.018 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.018 21:26:53 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:54.018 Found net devices under 0000:af:00.0: cvl_0_0 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:54.018 Found net devices under 0000:af:00.1: cvl_0_1 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.018 21:26:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.018 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.018 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.018 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.018 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.018 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:09:54.019 00:09:54.019 --- 10.0.0.2 ping statistics --- 00:09:54.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.019 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:09:54.019 00:09:54.019 --- 10.0.0.1 ping statistics --- 00:09:54.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.019 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1295201 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1295201 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 1295201 ']' 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:54.019 21:26:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:54.019 [2024-06-07 21:26:54.250117] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
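The block above is the per-test network bring-up: the harness splits the two E810 ports between the root network namespace and a private one so the target and the initiator can exchange real NVMe/TCP traffic on a single machine. Condensed into a minimal sketch (the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addressing are taken straight from the trace; adapt them to other NICs):
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # sanity-check both directions,
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # as the ping output here shows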
00:09:54.019 [2024-06-07 21:26:54.250171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.278 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.278 [2024-06-07 21:26:54.346984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.278 [2024-06-07 21:26:54.438458] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.278 [2024-06-07 21:26:54.438497] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.278 [2024-06-07 21:26:54.438507] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.278 [2024-06-07 21:26:54.438517] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.278 [2024-06-07 21:26:54.438524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.278 [2024-06-07 21:26:54.438575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.278 [2024-06-07 21:26:54.438676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.278 [2024-06-07 21:26:54.438788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.278 [2024-06-07 21:26:54.438789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.215 [2024-06-07 21:26:55.238867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.215 21:26:55 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.215 [2024-06-07 21:26:55.294863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:55.215 21:26:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:58.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:12.568 rmmod nvme_tcp 00:10:12.568 rmmod nvme_fabrics 00:10:12.568 rmmod nvme_keyring 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1295201 ']' 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1295201 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@949 -- # '[' -z 1295201 ']' 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 1295201 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1295201 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1295201' 00:10:12.568 killing process with pid 1295201 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 1295201 00:10:12.568 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 1295201 00:10:12.828 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.828 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.828 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.828 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.828 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.828 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.828 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.828 21:27:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.734 21:27:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:14.734 00:10:14.734 real 0m27.449s 00:10:14.734 user 1m15.018s 00:10:14.734 sys 0m6.267s 00:10:14.734 21:27:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:14.734 21:27:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.734 ************************************ 00:10:14.734 END TEST nvmf_connect_disconnect 00:10:14.734 ************************************ 00:10:14.734 21:27:14 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:14.734 21:27:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:14.734 21:27:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:14.734 21:27:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:14.734 ************************************ 00:10:14.734 START TEST nvmf_multitarget 00:10:14.734 ************************************ 00:10:14.734 21:27:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:14.993 * Looking for test storage... 
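Before the multitarget run gets going, it is worth condensing what the connect/disconnect test just exercised. On the target side, rpc_cmd drives the JSON-RPC interface of the nvmf_tgt started inside the namespace (shown below as direct scripts/rpc.py calls for clarity); on the initiator side, each of the five iterations (num_iterations=5 above) ends in one of the "disconnected 1 controller(s)" summaries printed between 00:09:58 and 00:10:12. A sketch of the whole sequence; the initiator commands are an assumption inferred from those summaries, since the trace only shows the disconnect output:
    # target side, verbatim from the trace
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                 # 64 MiB bdev, 512-byte blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side, presumably one connect/disconnect per iteration
    for i in {1..5}; do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done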
00:10:14.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.993 21:27:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.993 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:14.993 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.993 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.993 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.993 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.993 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.993 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.994 21:27:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:21.564 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:21.564 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:21.564 Found net devices under 0000:af:00.0: cvl_0_0 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
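The enumeration traced here (it continues below for the second port) is the harness mapping each supported PCI function to its kernel netdev by walking sysfs. Schematically the loop amounts to the following; this is reconstructed from the traced expansions rather than copied from nvmf/common.sh:
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs registered on this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done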
00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:21.564 Found net devices under 0000:af:00.1: cvl_0_1 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.564 21:27:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:10:21.564 00:10:21.564 --- 10.0.0.2 ping statistics --- 00:10:21.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.564 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:10:21.564 00:10:21.564 --- 10.0.0.1 ping statistics --- 00:10:21.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.564 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.564 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1302983 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1302983 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 1302983 ']' 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:21.565 21:27:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:21.565 [2024-06-07 21:27:21.237247] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
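waitforlisten then blocks until the freshly started nvmf_tgt (pid 1302983 here) is actually serving RPCs on /var/tmp/spdk.sock, retrying up to the max_retries=100 visible in the trace. A simplified sketch of the idea; the real helper exercises the RPC endpoint itself, so checking only for the socket is an approximation:
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while we waited
            [[ -S $rpc_addr ]] && return 0            # UNIX domain socket is up
            sleep 0.1
        done
        return 1                                      # timed out
    }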
00:10:21.565 [2024-06-07 21:27:21.237303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.565 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.565 [2024-06-07 21:27:21.333155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.565 [2024-06-07 21:27:21.425373] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.565 [2024-06-07 21:27:21.425413] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.565 [2024-06-07 21:27:21.425423] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.565 [2024-06-07 21:27:21.425435] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.565 [2024-06-07 21:27:21.425442] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.565 [2024-06-07 21:27:21.425492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.565 [2024-06-07 21:27:21.425592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.565 [2024-06-07 21:27:21.425626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.565 [2024-06-07 21:27:21.425626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:22.133 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:22.391 "nvmf_tgt_1" 00:10:22.391 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:22.391 "nvmf_tgt_2" 00:10:22.391 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:22.391 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:22.650 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:10:22.650 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:22.650 true 00:10:22.650 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:22.909 true 00:10:22.909 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:22.909 21:27:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:22.909 rmmod nvme_tcp 00:10:22.909 rmmod nvme_fabrics 00:10:22.909 rmmod nvme_keyring 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1302983 ']' 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1302983 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 1302983 ']' 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 1302983 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:22.909 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1302983 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1302983' 00:10:23.169 killing process with pid 1302983 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 1302983 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 1302983 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:23.169 21:27:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.706 21:27:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:25.706 00:10:25.706 real 0m10.486s 00:10:25.706 user 0m10.267s 00:10:25.706 sys 0m5.140s 00:10:25.706 21:27:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:25.706 21:27:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:25.706 ************************************ 00:10:25.706 END TEST nvmf_multitarget 00:10:25.706 ************************************ 00:10:25.706 21:27:25 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:25.706 21:27:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:25.706 21:27:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:25.706 21:27:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.706 ************************************ 00:10:25.706 START TEST nvmf_rpc 00:10:25.706 ************************************ 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:25.706 * Looking for test storage... 
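[annotation] Recapping the multitarget run that just finished: the whole test reduces to a create/verify/delete sequence against multitarget_rpc.py, with jq length confirming the target count at each step. The same sequence, condensed from the trace above; the path is relative to the spdk checkout and error handling is omitted:

    rpc=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default alone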
00:10:25.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.706 21:27:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:25.707 21:27:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
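[annotation] The chunk that follows is gather_supported_nvmf_pci_devs walking the PCI bus: it builds ID lists for E810 (0x1592/0x159b), X722 (0x37d2) and a range of Mellanox parts, then keeps only devices whose netdev reports up. Stripped of the array bookkeeping, the E810 case reduces to something like the sketch below; the lspci filter and operstate check are illustrative stand-ins for the harness's cached PCI scan:

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do    # E810-family ports
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            dev=${path##*/}                                        # e.g. cvl_0_0
            [ "$(cat /sys/class/net/"$dev"/operstate)" = up ] && echo "$dev"
        done
    done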
00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:32.328 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:32.328 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:32.328 Found net devices under 0000:af:00.0: cvl_0_0 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:32.328 Found net devices under 0000:af:00.1: cvl_0_1 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.328 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.329 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.329 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.329 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.329 21:27:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:10:32.329 00:10:32.329 --- 10.0.0.2 ping statistics --- 00:10:32.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.329 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:10:32.329 00:10:32.329 --- 10.0.0.1 ping statistics --- 00:10:32.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.329 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1307545 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1307545 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 1307545 ']' 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:32.329 21:27:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.329 [2024-06-07 21:27:32.361360] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:10:32.329 [2024-06-07 21:27:32.361415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.329 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.329 [2024-06-07 21:27:32.455713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.329 [2024-06-07 21:27:32.546272] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.329 [2024-06-07 21:27:32.546313] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
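[annotation] nvmf_tcp_init here is the same bring-up the multitarget test performed: one port of the NIC moves into a private namespace to act as the target, the other stays in the root namespace as the initiator, and a single iptables rule opens the NVMe/TCP port before connectivity is proven with one ping in each direction. Condensed, with the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root ns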
00:10:32.329 [2024-06-07 21:27:32.546324] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.329 [2024-06-07 21:27:32.546338] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.329 [2024-06-07 21:27:32.546347] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.329 [2024-06-07 21:27:32.546404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.329 [2024-06-07 21:27:32.546503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.329 [2024-06-07 21:27:32.546618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.329 [2024-06-07 21:27:32.546618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:33.265 "tick_rate": 2200000000, 00:10:33.265 "poll_groups": [ 00:10:33.265 { 00:10:33.265 "name": "nvmf_tgt_poll_group_000", 00:10:33.265 "admin_qpairs": 0, 00:10:33.265 "io_qpairs": 0, 00:10:33.265 "current_admin_qpairs": 0, 00:10:33.265 "current_io_qpairs": 0, 00:10:33.265 "pending_bdev_io": 0, 00:10:33.265 "completed_nvme_io": 0, 00:10:33.265 "transports": [] 00:10:33.265 }, 00:10:33.265 { 00:10:33.265 "name": "nvmf_tgt_poll_group_001", 00:10:33.265 "admin_qpairs": 0, 00:10:33.265 "io_qpairs": 0, 00:10:33.265 "current_admin_qpairs": 0, 00:10:33.265 "current_io_qpairs": 0, 00:10:33.265 "pending_bdev_io": 0, 00:10:33.265 "completed_nvme_io": 0, 00:10:33.265 "transports": [] 00:10:33.265 }, 00:10:33.265 { 00:10:33.265 "name": "nvmf_tgt_poll_group_002", 00:10:33.265 "admin_qpairs": 0, 00:10:33.265 "io_qpairs": 0, 00:10:33.265 "current_admin_qpairs": 0, 00:10:33.265 "current_io_qpairs": 0, 00:10:33.265 "pending_bdev_io": 0, 00:10:33.265 "completed_nvme_io": 0, 00:10:33.265 "transports": [] 00:10:33.265 }, 00:10:33.265 { 00:10:33.265 "name": "nvmf_tgt_poll_group_003", 00:10:33.265 "admin_qpairs": 0, 00:10:33.265 "io_qpairs": 0, 00:10:33.265 "current_admin_qpairs": 0, 00:10:33.265 "current_io_qpairs": 0, 00:10:33.265 "pending_bdev_io": 0, 00:10:33.265 "completed_nvme_io": 0, 00:10:33.265 "transports": [] 00:10:33.265 } 00:10:33.265 ] 00:10:33.265 }' 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.265 [2024-06-07 21:27:33.458891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.265 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:33.265 "tick_rate": 2200000000, 00:10:33.265 "poll_groups": [ 00:10:33.265 { 00:10:33.265 "name": "nvmf_tgt_poll_group_000", 00:10:33.265 "admin_qpairs": 0, 00:10:33.265 "io_qpairs": 0, 00:10:33.265 "current_admin_qpairs": 0, 00:10:33.265 "current_io_qpairs": 0, 00:10:33.265 "pending_bdev_io": 0, 00:10:33.265 "completed_nvme_io": 0, 00:10:33.265 "transports": [ 00:10:33.265 { 00:10:33.265 "trtype": "TCP" 00:10:33.265 } 00:10:33.265 ] 00:10:33.265 }, 00:10:33.265 { 00:10:33.265 "name": "nvmf_tgt_poll_group_001", 00:10:33.265 "admin_qpairs": 0, 00:10:33.265 "io_qpairs": 0, 00:10:33.265 "current_admin_qpairs": 0, 00:10:33.266 "current_io_qpairs": 0, 00:10:33.266 "pending_bdev_io": 0, 00:10:33.266 "completed_nvme_io": 0, 00:10:33.266 "transports": [ 00:10:33.266 { 00:10:33.266 "trtype": "TCP" 00:10:33.266 } 00:10:33.266 ] 00:10:33.266 }, 00:10:33.266 { 00:10:33.266 "name": "nvmf_tgt_poll_group_002", 00:10:33.266 "admin_qpairs": 0, 00:10:33.266 "io_qpairs": 0, 00:10:33.266 "current_admin_qpairs": 0, 00:10:33.266 "current_io_qpairs": 0, 00:10:33.266 "pending_bdev_io": 0, 00:10:33.266 "completed_nvme_io": 0, 00:10:33.266 "transports": [ 00:10:33.266 { 00:10:33.266 "trtype": "TCP" 00:10:33.266 } 00:10:33.266 ] 00:10:33.266 }, 00:10:33.266 { 00:10:33.266 "name": "nvmf_tgt_poll_group_003", 00:10:33.266 "admin_qpairs": 0, 00:10:33.266 "io_qpairs": 0, 00:10:33.266 "current_admin_qpairs": 0, 00:10:33.266 "current_io_qpairs": 0, 00:10:33.266 "pending_bdev_io": 0, 00:10:33.266 "completed_nvme_io": 0, 00:10:33.266 "transports": [ 00:10:33.266 { 00:10:33.266 "trtype": "TCP" 00:10:33.266 } 00:10:33.266 ] 00:10:33.266 } 00:10:33.266 ] 00:10:33.266 }' 00:10:33.266 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:33.266 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:33.266 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:33.266 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
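[annotation] jcount and jsum, traced here, are thin wrappers: jcount pipes a jq filter through wc -l to count matches, and jsum totals the values with awk '{s+=$1}END{print s}'. Both checks can also be written as a single jq program; an equivalent alternative formulation, not the harness code (rpc.py defaults to /var/tmp/spdk.sock, which works from the root namespace):

    ./scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].name] | length'    # jcount: 4 groups for -m 0xF
    ./scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'  # jsum: 0 while idle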
00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.525 Malloc1 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.525 [2024-06-07 21:27:33.639241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 
-- # local arg=nvme 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:33.525 [2024-06-07 21:27:33.667830] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:10:33.525 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:33.525 could not add new controller: failed to write to nvme-fabrics device 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:33.525 21:27:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.903 21:27:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.903 21:27:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:34.903 21:27:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.903 21:27:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:34.903 21:27:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:36.808 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:36.808 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:36.808 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.068 21:27:37 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.068 [2024-06-07 21:27:37.207181] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:10:37.068 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:37.068 could not add new controller: failed to write to nvme-fabrics device 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@652 -- # es=1 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:37.068 21:27:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.445 21:27:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.446 21:27:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:38.446 21:27:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.446 21:27:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:38.446 21:27:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:40.350 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:40.350 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:40.350 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.350 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:40.350 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.350 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:40.350 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:40.609 21:27:40 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.609 [2024-06-07 21:27:40.725074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.609 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:40.610 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:40.610 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:40.610 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.610 21:27:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:40.610 21:27:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.986 21:27:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.986 21:27:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:41.986 21:27:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.986 21:27:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:41.986 21:27:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:43.891 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.150 [2024-06-07 21:27:44.182189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.150 21:27:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.528 21:27:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:45.528 21:27:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 
-- # local i=0 00:10:45.528 21:27:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:45.528 21:27:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:45.528 21:27:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.431 [2024-06-07 21:27:47.660598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.431 21:27:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.809 21:27:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.809 21:27:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:48.809 21:27:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.809 21:27:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:48.809 21:27:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:50.713 21:27:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:50.713 21:27:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:50.713 21:27:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.713 21:27:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:50.713 21:27:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.713 21:27:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:50.713 21:27:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.973 [2024-06-07 21:27:51.097162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.973 21:27:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.351 21:27:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.351 21:27:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:52.351 21:27:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.351 21:27:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:52.351 21:27:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.254 
21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.254 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.255 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.255 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.255 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.255 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 [2024-06-07 21:27:54.535585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.513 21:27:54 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.513 21:27:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.894 21:27:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.894 21:27:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:55.894 21:27:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.894 21:27:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:55.894 21:27:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.798 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 [2024-06-07 21:27:57.978813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 [2024-06-07 21:27:58.026944] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.799 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 [2024-06-07 21:27:58.079121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
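[editor's note] For orientation in the xtrace above: the connect/disconnect iterations earlier in this trace (target/rpc.sh@81-94) gate on a waitforserial helper that polls lsblk until a block device with the expected serial appears, and waitforserial_disconnect polls until it is gone. A hedged paraphrase of that helper, reconstructed from the traced commands (autotest_common.sh@1197-1207); variable and function names mirror the trace but the exact control flow is approximated, not the harness verbatim:

# Sketch only: poll for a namespace whose SERIAL matches after `nvme connect`.
waitforserial() {
    local serial=$1
    local i=0 nvme_device_counter=1 nvme_devices=0
    sleep 2                                   # traced at autotest_common.sh@1204
    while (( i++ <= 15 )); do                 # bounded retry, as in the trace
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 1                               # assumption: trace elides the delay
    done
    return 1
}
# Typical use, matching the trace:  nvme connect ... -s 4420 && waitforserial SPDKISFASTANDAWESOME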
00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 [2024-06-07 21:27:58.127308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
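[editor's note] The iterations immediately above and below come from a second loop (target/rpc.sh@99-107) that churns a subsystem through its full lifecycle without ever connecting a host. Reconstructed as a standalone script it would look roughly like the sketch below; the RPC commands and the Malloc1 bdev name are taken verbatim from the trace, while the surrounding setup (a running nvmf_tgt with a TCP transport already created) is assumed:

# Hedged reconstruction of the create/teardown churn traced at rpc.sh@99-107.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 5); do                       # the trace uses loops=5
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1    # assumption: nsid 1 from add_ns
    $rpc nvmf_delete_subsystem "$nqn"
done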
00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 [2024-06-07 21:27:58.175484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.059 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:58.059 "tick_rate": 2200000000, 00:10:58.059 "poll_groups": [ 00:10:58.059 { 00:10:58.059 "name": "nvmf_tgt_poll_group_000", 00:10:58.059 "admin_qpairs": 2, 00:10:58.059 "io_qpairs": 196, 00:10:58.059 "current_admin_qpairs": 0, 00:10:58.059 "current_io_qpairs": 0, 00:10:58.059 "pending_bdev_io": 0, 00:10:58.059 "completed_nvme_io": 298, 00:10:58.059 "transports": [ 00:10:58.059 { 00:10:58.059 "trtype": "TCP" 00:10:58.059 } 00:10:58.059 ] 00:10:58.059 }, 00:10:58.059 { 00:10:58.060 "name": "nvmf_tgt_poll_group_001", 00:10:58.060 "admin_qpairs": 2, 00:10:58.060 "io_qpairs": 196, 00:10:58.060 "current_admin_qpairs": 0, 00:10:58.060 "current_io_qpairs": 0, 00:10:58.060 "pending_bdev_io": 0, 00:10:58.060 "completed_nvme_io": 295, 00:10:58.060 "transports": [ 00:10:58.060 { 00:10:58.060 "trtype": "TCP" 00:10:58.060 } 00:10:58.060 ] 00:10:58.060 }, 00:10:58.060 { 
00:10:58.060 "name": "nvmf_tgt_poll_group_002", 00:10:58.060 "admin_qpairs": 1, 00:10:58.060 "io_qpairs": 196, 00:10:58.060 "current_admin_qpairs": 0, 00:10:58.060 "current_io_qpairs": 0, 00:10:58.060 "pending_bdev_io": 0, 00:10:58.060 "completed_nvme_io": 296, 00:10:58.060 "transports": [ 00:10:58.060 { 00:10:58.060 "trtype": "TCP" 00:10:58.060 } 00:10:58.060 ] 00:10:58.060 }, 00:10:58.060 { 00:10:58.060 "name": "nvmf_tgt_poll_group_003", 00:10:58.060 "admin_qpairs": 2, 00:10:58.060 "io_qpairs": 196, 00:10:58.060 "current_admin_qpairs": 0, 00:10:58.060 "current_io_qpairs": 0, 00:10:58.060 "pending_bdev_io": 0, 00:10:58.060 "completed_nvme_io": 245, 00:10:58.060 "transports": [ 00:10:58.060 { 00:10:58.060 "trtype": "TCP" 00:10:58.060 } 00:10:58.060 ] 00:10:58.060 } 00:10:58.060 ] 00:10:58.060 }' 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:58.060 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.319 rmmod nvme_tcp 00:10:58.319 rmmod nvme_fabrics 00:10:58.319 rmmod nvme_keyring 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1307545 ']' 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1307545 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 1307545 ']' 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 1307545 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1307545 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1307545' 00:10:58.319 killing process with pid 1307545 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 1307545 00:10:58.319 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 1307545 00:10:58.578 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.578 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:58.578 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:58.578 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.578 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.578 21:27:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.578 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.578 21:27:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.527 21:28:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:00.527 00:11:00.527 real 0m35.203s 00:11:00.527 user 1m46.104s 00:11:00.527 sys 0m6.907s 00:11:00.527 21:28:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:00.527 21:28:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.527 ************************************ 00:11:00.527 END TEST nvmf_rpc 00:11:00.527 ************************************ 00:11:00.527 21:28:00 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:00.527 21:28:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:00.527 21:28:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:00.527 21:28:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.786 ************************************ 00:11:00.786 START TEST nvmf_invalid 00:11:00.786 ************************************ 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:00.786 * Looking for test storage... 
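[editor's note] Before the nvmf_invalid run gets going, one detail from the stats check that closed nvmf_rpc above: the jsum helper traced at target/rpc.sh@19-20 is just a jq projection summed by awk, applied to the saved nvmf_get_stats JSON. A minimal sketch, assuming the stats document arrives on stdin (the harness's exact plumbing of its $stats variable may differ):

# jsum, loosely reconstructed from the trace: one number per poll group, summed.
jsum() {
    local filter=$1
    jq "$filter" | awk '{s+=$1} END {print s}'
}
# Against the dump above this reproduces the traced totals:
#   jsum '.poll_groups[].admin_qpairs' <<< "$stats"   # 7   (2+2+1+2)
#   jsum '.poll_groups[].io_qpairs'    <<< "$stats"   # 784 (4 x 196)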
00:11:00.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.786 21:28:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:00.787 21:28:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:07.356 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:07.356 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:07.356 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:07.357 Found net devices under 0000:af:00.0: cvl_0_0 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:07.357 Found net devices under 0000:af:00.1: cvl_0_1 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.357 21:28:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:07.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:11:07.357 00:11:07.357 --- 10.0.0.2 ping statistics --- 00:11:07.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.357 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:11:07.357 00:11:07.357 --- 10.0.0.1 ping statistics --- 00:11:07.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.357 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1316336 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1316336 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 1316336 ']' 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:07.357 21:28:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:07.357 [2024-06-07 21:28:07.328924] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:11:07.357 [2024-06-07 21:28:07.328980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.357 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.357 [2024-06-07 21:28:07.425352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.357 [2024-06-07 21:28:07.515240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.357 [2024-06-07 21:28:07.515285] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.357 [2024-06-07 21:28:07.515296] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.357 [2024-06-07 21:28:07.515304] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.357 [2024-06-07 21:28:07.515311] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.357 [2024-06-07 21:28:07.515367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.357 [2024-06-07 21:28:07.515464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.357 [2024-06-07 21:28:07.515577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.357 [2024-06-07 21:28:07.515577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15577 00:11:08.293 [2024-06-07 21:28:08.448007] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:08.293 { 00:11:08.293 "nqn": "nqn.2016-06.io.spdk:cnode15577", 00:11:08.293 "tgt_name": "foobar", 00:11:08.293 "method": "nvmf_create_subsystem", 00:11:08.293 "req_id": 1 00:11:08.293 } 00:11:08.293 Got JSON-RPC error response 00:11:08.293 response: 00:11:08.293 { 00:11:08.293 "code": -32603, 00:11:08.293 "message": "Unable to find target foobar" 00:11:08.293 }' 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:08.293 { 00:11:08.293 "nqn": "nqn.2016-06.io.spdk:cnode15577", 00:11:08.293 "tgt_name": "foobar", 00:11:08.293 "method": "nvmf_create_subsystem", 00:11:08.293 "req_id": 1 00:11:08.293 } 00:11:08.293 Got JSON-RPC error response 00:11:08.293 response: 00:11:08.293 { 00:11:08.293 "code": -32603, 00:11:08.293 "message": "Unable to find target foobar" 
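This is the suite's standard negative-call pattern: capture the rpc.py failure output into $out, then assert with the escaped bash pattern match (which resumes just below) that the JSON-RPC error carries the expected message. For the bogus target name above, the shape is roughly:

    out=$(scripts/rpc.py nvmf_create_subsystem -t foobar \
            nqn.2016-06.io.spdk:cnode15577 2>&1) || true
    [[ $out == *"Unable to find target"* ]]    # expects code -32603 from the target

Every invalid-input case that follows (serial number, model number, cntlid ranges) repeats this capture-and-match shape with a different expected message.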
00:11:08.293 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:08.293 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5755 00:11:08.551 [2024-06-07 21:28:08.705003] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5755: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:08.551 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:08.551 { 00:11:08.551 "nqn": "nqn.2016-06.io.spdk:cnode5755", 00:11:08.551 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:08.551 "method": "nvmf_create_subsystem", 00:11:08.551 "req_id": 1 00:11:08.551 } 00:11:08.551 Got JSON-RPC error response 00:11:08.551 response: 00:11:08.551 { 00:11:08.551 "code": -32602, 00:11:08.551 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:08.551 }' 00:11:08.551 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:08.552 { 00:11:08.552 "nqn": "nqn.2016-06.io.spdk:cnode5755", 00:11:08.552 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:08.552 "method": "nvmf_create_subsystem", 00:11:08.552 "req_id": 1 00:11:08.552 } 00:11:08.552 Got JSON-RPC error response 00:11:08.552 response: 00:11:08.552 { 00:11:08.552 "code": -32602, 00:11:08.552 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:08.552 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:08.552 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:08.552 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5644 00:11:08.810 [2024-06-07 21:28:08.953830] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5644: invalid model number 'SPDK_Controller' 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:08.810 { 00:11:08.810 "nqn": "nqn.2016-06.io.spdk:cnode5644", 00:11:08.810 "model_number": "SPDK_Controller\u001f", 00:11:08.810 "method": "nvmf_create_subsystem", 00:11:08.810 "req_id": 1 00:11:08.810 } 00:11:08.810 Got JSON-RPC error response 00:11:08.810 response: 00:11:08.810 { 00:11:08.810 "code": -32602, 00:11:08.810 "message": "Invalid MN SPDK_Controller\u001f" 00:11:08.810 }' 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:08.810 { 00:11:08.810 "nqn": "nqn.2016-06.io.spdk:cnode5644", 00:11:08.810 "model_number": "SPDK_Controller\u001f", 00:11:08.810 "method": "nvmf_create_subsystem", 00:11:08.810 "req_id": 1 00:11:08.810 } 00:11:08.810 Got JSON-RPC error response 00:11:08.810 response: 00:11:08.810 { 00:11:08.810 "code": -32602, 00:11:08.810 "message": "Invalid MN SPDK_Controller\u001f" 00:11:08.810 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' 
'85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.810 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:08.811 21:28:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 
21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:08.811 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '/m!~o\Dt%!^Oup:M^v4@' 00:11:09.070 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '/m!~o\Dt%!^Oup:M^v4@' nqn.2016-06.io.spdk:cnode2264 00:11:09.329 [2024-06-07 21:28:09.347197] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2264: invalid serial number '/m!~o\Dt%!^Oup:M^v4@' 00:11:09.329 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:09.329 { 00:11:09.329 "nqn": "nqn.2016-06.io.spdk:cnode2264", 00:11:09.329 "serial_number": "/m!~o\\D\u007ft%!^Oup:M^v4@", 00:11:09.329 "method": "nvmf_create_subsystem", 00:11:09.329 "req_id": 1 00:11:09.329 } 00:11:09.329 Got JSON-RPC error response 00:11:09.329 response: 
00:11:09.329 { 00:11:09.329 "code": -32602, 00:11:09.329 "message": "Invalid SN /m!~o\\D\u007ft%!^Oup:M^v4@" 00:11:09.329 }' 00:11:09.329 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:09.329 { 00:11:09.329 "nqn": "nqn.2016-06.io.spdk:cnode2264", 00:11:09.329 "serial_number": "/m!~o\\D\u007ft%!^Oup:M^v4@", 00:11:09.329 "method": "nvmf_create_subsystem", 00:11:09.329 "req_id": 1 00:11:09.329 } 00:11:09.329 Got JSON-RPC error response 00:11:09.329 response: 00:11:09.329 { 00:11:09.329 "code": -32602, 00:11:09.329 "message": "Invalid SN /m!~o\\D\u007ft%!^Oup:M^v4@" 00:11:09.329 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:09.329 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:09.329 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 63 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x43' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=b 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:09.330 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:09.331 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hg-x?`S.,NVFCT7`CNeb1U>O"|s|iN9-w]T4D[d' 00:11:09.606 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'hg-x?`S.,NVFCT7`CNeb1U>O"|s|iN9-w]T4D[d' nqn.2016-06.io.spdk:cnode20193 00:11:09.606 [2024-06-07 21:28:09.865041] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20193: invalid model number 'hg-x?`S.,NVFCT7`CNeb1U>O"|s|iN9-w]T4D[d' 00:11:09.864 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:09.864 { 00:11:09.864 "nqn": "nqn.2016-06.io.spdk:cnode20193", 00:11:09.864 "model_number": "hg-x?`S.,NVFCT7`C\u007fNeb1U>O\"\u007f|s|iN9-w]T4D[d", 00:11:09.864 "method": "nvmf_create_subsystem", 00:11:09.864 "req_id": 1 00:11:09.864 } 00:11:09.864 Got JSON-RPC error response 00:11:09.864 response: 00:11:09.864 { 00:11:09.864 "code": -32602, 00:11:09.864 "message": "Invalid MN hg-x?`S.,NVFCT7`C\u007fNeb1U>O\"\u007f|s|iN9-w]T4D[d" 00:11:09.864 }' 00:11:09.864 21:28:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:09.864 { 00:11:09.864 "nqn": "nqn.2016-06.io.spdk:cnode20193", 00:11:09.864 "model_number": "hg-x?`S.,NVFCT7`C\u007fNeb1U>O\"\u007f|s|iN9-w]T4D[d", 00:11:09.864 "method": "nvmf_create_subsystem", 00:11:09.864 "req_id": 1 00:11:09.864 } 00:11:09.864 Got JSON-RPC error response 00:11:09.864 response: 00:11:09.864 { 00:11:09.864 "code": -32602, 00:11:09.864 "message": "Invalid MN hg-x?`S.,NVFCT7`C\u007fNeb1U>O\"\u007f|s|iN9-w]T4D[d" 00:11:09.864 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:09.864 21:28:09 
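The wall of printf/echo lines above is gen_random_s building those strings one character at a time: it draws a random decimal code from the chars array (ASCII 32 through 127), converts it to hex with printf %x, and appends the resulting glyph via echo -e. A self-contained sketch follows; the leading-dash handling is an assumption, since the trace only shows the [[ ... == \- ]] guard that keeps rpc.py from parsing the string as an option:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})                      # printable ASCII plus DEL
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -en "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        [[ ${string:0:1} == - ]] && string="x${string:1}"   # assumed: never start with '-'
        echo "$string"
    }

The 21- and 41-character results are then passed to nvmf_create_subsystem as serial and model number; both runs here happened to include the non-printable 0x7f byte (rendered as \u007f in the JSON), so the target rejects them with Invalid SN / Invalid MN.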
nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:09.864 [2024-06-07 21:28:10.126122] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.122 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:10.122 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:10.122 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:10.122 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:10.122 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:10.122 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:10.380 [2024-06-07 21:28:10.575706] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:10.380 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:10.380 { 00:11:10.380 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:10.380 "listen_address": { 00:11:10.380 "trtype": "tcp", 00:11:10.380 "traddr": "", 00:11:10.380 "trsvcid": "4421" 00:11:10.380 }, 00:11:10.380 "method": "nvmf_subsystem_remove_listener", 00:11:10.380 "req_id": 1 00:11:10.380 } 00:11:10.380 Got JSON-RPC error response 00:11:10.380 response: 00:11:10.380 { 00:11:10.380 "code": -32602, 00:11:10.380 "message": "Invalid parameters" 00:11:10.380 }' 00:11:10.380 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:10.380 { 00:11:10.380 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:10.380 "listen_address": { 00:11:10.380 "trtype": "tcp", 00:11:10.380 "traddr": "", 00:11:10.380 "trsvcid": "4421" 00:11:10.380 }, 00:11:10.380 "method": "nvmf_subsystem_remove_listener", 00:11:10.380 "req_id": 1 00:11:10.380 } 00:11:10.380 Got JSON-RPC error response 00:11:10.380 response: 00:11:10.380 { 00:11:10.380 "code": -32602, 00:11:10.380 "message": "Invalid parameters" 00:11:10.380 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:10.380 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12580 -i 0 00:11:10.639 [2024-06-07 21:28:10.836602] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12580: invalid cntlid range [0-65519] 00:11:10.639 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:10.639 { 00:11:10.639 "nqn": "nqn.2016-06.io.spdk:cnode12580", 00:11:10.639 "min_cntlid": 0, 00:11:10.639 "method": "nvmf_create_subsystem", 00:11:10.639 "req_id": 1 00:11:10.639 } 00:11:10.639 Got JSON-RPC error response 00:11:10.639 response: 00:11:10.639 { 00:11:10.639 "code": -32602, 00:11:10.639 "message": "Invalid cntlid range [0-65519]" 00:11:10.639 }' 00:11:10.639 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:10.639 { 00:11:10.639 "nqn": "nqn.2016-06.io.spdk:cnode12580", 00:11:10.639 "min_cntlid": 0, 00:11:10.639 "method": "nvmf_create_subsystem", 00:11:10.639 "req_id": 1 00:11:10.639 } 00:11:10.639 Got JSON-RPC error response 00:11:10.639 response: 00:11:10.639 { 00:11:10.639 "code": -32602, 00:11:10.639 "message": "Invalid cntlid range [0-65519]" 
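Between the model-number cases and the cntlid probe whose response is printing here, the script stood up the real data path: nvmf_create_transport initialized the TCP transport, a clean subsystem was created, and a listener that was never added was removed to exercise the error path. In rpc.py terms:

    scripts/rpc.py nvmf_create_transport --trtype tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
            -t tcp -a '' -s 4421     # -> -32602 "Invalid parameters"

The empty traddr is the point: with no matching listener the target returns -32602 rather than the 'Unable to stop listener.' message the match above guards against.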
00:11:10.639 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:10.639 21:28:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22192 -i 65520 00:11:10.898 [2024-06-07 21:28:11.097534] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22192: invalid cntlid range [65520-65519] 00:11:10.898 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:10.898 { 00:11:10.898 "nqn": "nqn.2016-06.io.spdk:cnode22192", 00:11:10.898 "min_cntlid": 65520, 00:11:10.898 "method": "nvmf_create_subsystem", 00:11:10.898 "req_id": 1 00:11:10.898 } 00:11:10.898 Got JSON-RPC error response 00:11:10.898 response: 00:11:10.898 { 00:11:10.898 "code": -32602, 00:11:10.898 "message": "Invalid cntlid range [65520-65519]" 00:11:10.898 }' 00:11:10.898 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:10.898 { 00:11:10.898 "nqn": "nqn.2016-06.io.spdk:cnode22192", 00:11:10.898 "min_cntlid": 65520, 00:11:10.898 "method": "nvmf_create_subsystem", 00:11:10.898 "req_id": 1 00:11:10.898 } 00:11:10.898 Got JSON-RPC error response 00:11:10.898 response: 00:11:10.898 { 00:11:10.898 "code": -32602, 00:11:10.898 "message": "Invalid cntlid range [65520-65519]" 00:11:10.898 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:10.898 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11119 -I 0 00:11:11.156 [2024-06-07 21:28:11.346411] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11119: invalid cntlid range [1-0] 00:11:11.156 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:11.156 { 00:11:11.156 "nqn": "nqn.2016-06.io.spdk:cnode11119", 00:11:11.156 "max_cntlid": 0, 00:11:11.156 "method": "nvmf_create_subsystem", 00:11:11.156 "req_id": 1 00:11:11.156 } 00:11:11.156 Got JSON-RPC error response 00:11:11.156 response: 00:11:11.156 { 00:11:11.156 "code": -32602, 00:11:11.156 "message": "Invalid cntlid range [1-0]" 00:11:11.156 }' 00:11:11.156 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:11.156 { 00:11:11.156 "nqn": "nqn.2016-06.io.spdk:cnode11119", 00:11:11.156 "max_cntlid": 0, 00:11:11.156 "method": "nvmf_create_subsystem", 00:11:11.156 "req_id": 1 00:11:11.156 } 00:11:11.156 Got JSON-RPC error response 00:11:11.156 response: 00:11:11.156 { 00:11:11.156 "code": -32602, 00:11:11.156 "message": "Invalid cntlid range [1-0]" 00:11:11.156 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:11.156 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22549 -I 65520 00:11:11.414 [2024-06-07 21:28:11.595304] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22549: invalid cntlid range [1-65520] 00:11:11.414 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:11.414 { 00:11:11.414 "nqn": "nqn.2016-06.io.spdk:cnode22549", 00:11:11.414 "max_cntlid": 65520, 00:11:11.414 "method": "nvmf_create_subsystem", 00:11:11.414 "req_id": 1 00:11:11.414 } 00:11:11.414 Got JSON-RPC error response 00:11:11.414 response: 00:11:11.414 { 00:11:11.414 "code": -32602, 00:11:11.414 "message": "Invalid cntlid range [1-65520]" 00:11:11.414 }' 00:11:11.414 
21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:11.414 { 00:11:11.414 "nqn": "nqn.2016-06.io.spdk:cnode22549", 00:11:11.414 "max_cntlid": 65520, 00:11:11.415 "method": "nvmf_create_subsystem", 00:11:11.415 "req_id": 1 00:11:11.415 } 00:11:11.415 Got JSON-RPC error response 00:11:11.415 response: 00:11:11.415 { 00:11:11.415 "code": -32602, 00:11:11.415 "message": "Invalid cntlid range [1-65520]" 00:11:11.415 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:11.415 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9182 -i 6 -I 5 00:11:11.673 [2024-06-07 21:28:11.775944] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9182: invalid cntlid range [6-5] 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:11.673 { 00:11:11.673 "nqn": "nqn.2016-06.io.spdk:cnode9182", 00:11:11.673 "min_cntlid": 6, 00:11:11.673 "max_cntlid": 5, 00:11:11.673 "method": "nvmf_create_subsystem", 00:11:11.673 "req_id": 1 00:11:11.673 } 00:11:11.673 Got JSON-RPC error response 00:11:11.673 response: 00:11:11.673 { 00:11:11.673 "code": -32602, 00:11:11.673 "message": "Invalid cntlid range [6-5]" 00:11:11.673 }' 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:11.673 { 00:11:11.673 "nqn": "nqn.2016-06.io.spdk:cnode9182", 00:11:11.673 "min_cntlid": 6, 00:11:11.673 "max_cntlid": 5, 00:11:11.673 "method": "nvmf_create_subsystem", 00:11:11.673 "req_id": 1 00:11:11.673 } 00:11:11.673 Got JSON-RPC error response 00:11:11.673 response: 00:11:11.673 { 00:11:11.673 "code": -32602, 00:11:11.673 "message": "Invalid cntlid range [6-5]" 00:11:11.673 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:11.673 { 00:11:11.673 "name": "foobar", 00:11:11.673 "method": "nvmf_delete_target", 00:11:11.673 "req_id": 1 00:11:11.673 } 00:11:11.673 Got JSON-RPC error response 00:11:11.673 response: 00:11:11.673 { 00:11:11.673 "code": -32602, 00:11:11.673 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:11.673 }' 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:11.673 { 00:11:11.673 "name": "foobar", 00:11:11.673 "method": "nvmf_delete_target", 00:11:11.673 "req_id": 1 00:11:11.673 } 00:11:11.673 Got JSON-RPC error response 00:11:11.673 response: 00:11:11.673 { 00:11:11.673 "code": -32602, 00:11:11.673 "message": "The specified target doesn't exist, cannot delete it." 
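Taken together, the five cntlid probes bracket the legal window: min_cntlid and max_cntlid must each fall within [1, 65519] (0xFFEF) and min must not exceed max, so 0, 65520, and the inverted pair 6/5 are all rejected with -32602. For example:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9182 -i 6 -I 5
    # -> -32602 "Invalid cntlid range [6-5]"

The final negative case, nvmf_delete_target on a name that never existed, closes out the suite before the trap-driven teardown (rmmod of the nvme modules and killprocess of the target pid) runs below.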
00:11:11.673 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.673 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.673 rmmod nvme_tcp 00:11:11.673 rmmod nvme_fabrics 00:11:11.673 rmmod nvme_keyring 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1316336 ']' 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1316336 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 1316336 ']' 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 1316336 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:11.932 21:28:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1316336 00:11:11.932 21:28:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:11.932 21:28:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:11.932 21:28:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1316336' 00:11:11.932 killing process with pid 1316336 00:11:11.932 21:28:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 1316336 00:11:11.932 21:28:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 1316336 00:11:12.191 21:28:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.191 21:28:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.191 21:28:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.191 21:28:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.191 21:28:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.191 21:28:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.191 21:28:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.191 21:28:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.094 21:28:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:14.094 00:11:14.094 real 0m13.483s 00:11:14.094 user 0m23.394s 00:11:14.094 sys 0m5.853s 00:11:14.094 21:28:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:14.094 21:28:14 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:14.094 ************************************ 00:11:14.094 END TEST nvmf_invalid 00:11:14.094 ************************************ 00:11:14.094 21:28:14 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:14.094 21:28:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:14.094 21:28:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:14.094 21:28:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:14.094 ************************************ 00:11:14.094 START TEST nvmf_abort 00:11:14.094 ************************************ 00:11:14.094 21:28:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:14.353 * Looking for test storage... 00:11:14.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:14.353 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:14.354 21:28:14 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:11:14.354 21:28:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.920 
21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:20.920 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:20.920 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:20.920 Found net devices under 0000:af:00.0: cvl_0_0 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:20.920 Found net devices under 0000:af:00.1: cvl_0_1 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:20.920 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:20.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:20.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:11:20.921 00:11:20.921 --- 10.0.0.2 ping statistics --- 00:11:20.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.921 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:11:20.921 00:11:20.921 --- 10.0.0.1 ping statistics --- 00:11:20.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.921 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1321423 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1321423 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 1321423 ']' 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:20.921 21:28:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:20.921 [2024-06-07 21:28:20.938459] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
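A note on the core mask: the -m 0xE passed to nvmfappstart above is hexadecimal 0b1110, i.e. cores 1-3, which matches the three reactor threads reported once EAL finishes below. A quick, illustrative way to expand any such mask in the shell:
  mask=0xE
  for i in {0..31}; do (( (mask >> i) & 1 )) && echo "core $i"; done
  # prints: core 1, core 2, core 3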
00:11:20.921 [2024-06-07 21:28:20.938512] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.921 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.921 [2024-06-07 21:28:21.024935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:20.921 [2024-06-07 21:28:21.116093] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.921 [2024-06-07 21:28:21.116135] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.921 [2024-06-07 21:28:21.116145] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.921 [2024-06-07 21:28:21.116154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.921 [2024-06-07 21:28:21.116161] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.921 [2024-06-07 21:28:21.116266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.921 [2024-06-07 21:28:21.116388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.921 [2024-06-07 21:28:21.116389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 [2024-06-07 21:28:21.915097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 Malloc0 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 Delay0 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
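The rpc_cmd helper used above is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; issued directly, the storage stack being assembled for this test is roughly the following (same arguments as in the trace; flag comments are our reading, not script output):
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB RAM bdev, 4096-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # inject ~1 s latency (args in microseconds) so I/O stays in flight long enough to abort
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
The Delay0 namespace and the 10.0.0.2:4420 listener are attached in the next step.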
00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 [2024-06-07 21:28:21.980881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:21.855 21:28:21 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:21.855 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.114 [2024-06-07 21:28:22.138228] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:24.018 Initializing NVMe Controllers 00:11:24.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:24.018 controller IO queue size 128 less than required 00:11:24.018 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:24.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:24.018 Initialization complete. Launching workers. 
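The workload here is SPDK's bundled abort example; the exact invocation from this run, reusable against any NVMe/TCP listener, is shown below (flag roles follow the usual SPDK example conventions):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \  # transport ID of the listener created above
      -c 0x1 -t 1 \      # single core, short timed run
      -q 128 -l warning  # queue depth 128 (what triggers the "IO queue size 128" notice above); only warnings logged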
00:11:24.018 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28967 00:11:24.018 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29028, failed to submit 62 00:11:24.018 success 28971, unsuccess 57, failed 0 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.018 rmmod nvme_tcp 00:11:24.018 rmmod nvme_fabrics 00:11:24.018 rmmod nvme_keyring 00:11:24.018 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1321423 ']' 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1321423 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 1321423 ']' 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 1321423 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1321423 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1321423' 00:11:24.277 killing process with pid 1321423 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 1321423 00:11:24.277 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 1321423 00:11:24.535 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.535 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.535 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.535 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.535 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.535 21:28:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.536 21:28:24 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.536 21:28:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.441 21:28:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.441 00:11:26.441 real 0m12.267s 00:11:26.441 user 0m14.001s 00:11:26.441 sys 0m5.712s 00:11:26.441 21:28:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:26.441 21:28:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:26.441 ************************************ 00:11:26.441 END TEST nvmf_abort 00:11:26.441 ************************************ 00:11:26.441 21:28:26 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:26.441 21:28:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:26.441 21:28:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:26.441 21:28:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:26.441 ************************************ 00:11:26.441 START TEST nvmf_ns_hotplug_stress 00:11:26.441 ************************************ 00:11:26.441 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:26.701 * Looking for test storage... 00:11:26.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.701 21:28:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.701 21:28:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.701 21:28:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:33.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:33.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.270 21:28:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:33.270 Found net devices under 0000:af:00.0: cvl_0_0 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:33.270 Found net devices under 0000:af:00.1: cvl_0_1 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
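The pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above is the entire PCI-to-netdev mapping; it works just as well interactively on this host:
  ls /sys/bus/pci/devices/0000:af:00.0/net   # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:af:00.1/net   # -> cvl_0_1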
00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.270 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.271 21:28:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:11:33.271 00:11:33.271 --- 10.0.0.2 ping statistics --- 00:11:33.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.271 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:11:33.271 00:11:33.271 --- 10.0.0.1 ping statistics --- 00:11:33.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.271 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1325985 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1325985 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 1325985 ']' 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:33.271 21:28:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.271 [2024-06-07 21:28:33.183658] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
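Collected from the trace, the namespace plumbing that those two pings just verified comes down to this sequence (interface names and the 10.0.0.0/24 addressing are specific to this rig):
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1            # drop any stale addresses
  ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1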
00:11:33.271 [2024-06-07 21:28:33.183714] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.271 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.271 [2024-06-07 21:28:33.271175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:33.271 [2024-06-07 21:28:33.358382] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.271 [2024-06-07 21:28:33.358427] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.271 [2024-06-07 21:28:33.358438] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.271 [2024-06-07 21:28:33.358447] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.271 [2024-06-07 21:28:33.358455] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.271 [2024-06-07 21:28:33.358564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.271 [2024-06-07 21:28:33.358669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.271 [2024-06-07 21:28:33.358670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:34.208 [2024-06-07 21:28:34.385822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.208 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:34.467 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.467 [2024-06-07 21:28:34.732156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.726 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.726 21:28:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:11:34.985 Malloc0 00:11:34.985 21:28:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:35.243 Delay0 00:11:35.243 21:28:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.502 21:28:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:35.502 NULL1 00:11:35.502 21:28:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:35.761 21:28:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:35.761 21:28:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1326539 00:11:35.761 21:28:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:35.761 21:28:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.761 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.139 Read completed with error (sct=0, sc=11) 00:11:37.139 21:28:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:37.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:37.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:37.139 21:28:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:37.139 21:28:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:37.398 true 00:11:37.398 21:28:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:37.398 21:28:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.431 21:28:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.431 21:28:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:38.431 21:28:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:38.690 true 00:11:38.690 21:28:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:38.690 21:28:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.690 21:28:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.949 21:28:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:38.949 21:28:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:38.949 true 00:11:39.208 21:28:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:39.208 21:28:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:40.145 21:28:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:40.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:40.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:40.403 21:28:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:40.403 21:28:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:40.403 true 00:11:40.662 21:28:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:40.662 21:28:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.598 21:28:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:41.598 21:28:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:41.598 21:28:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:41.598 true 00:11:41.598 21:28:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:41.598 21:28:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.856 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.113 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:42.114 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:42.114 true 00:11:42.371 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:42.371 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.630 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.630 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:42.630 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:42.889 true 00:11:42.889 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:42.889 21:28:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.889 21:28:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.147 21:28:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:43.147 21:28:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:43.405 true 00:11:43.405 21:28:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:43.405 21:28:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.781 21:28:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.781 21:28:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:44.781 21:28:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:44.781 true 00:11:44.781 21:28:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:44.781 21:28:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.715 21:28:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.973 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 
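From here the script keeps repeating the same remove/add/resize cycle while the spdk_nvme_perf reader (PERF_PID=1326539) is alive; reconstructed from the trace, the loop body is roughly the following sketch, not the script verbatim:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                         # run until the 30 s randread workload exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 under I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach it
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                      # grow NULL1 each pass (same size units as bdev_null_create)
  done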
00:11:45.973 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:45.973 true 00:11:45.973 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:45.973 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.232 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.490 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:46.490 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:46.490 true 00:11:46.749 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:46.749 21:28:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.944 21:28:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:47.944 21:28:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:47.944 21:28:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:48.204 true 00:11:48.204 21:28:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:48.204 21:28:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.141 21:28:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:49.141 21:28:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:49.141 21:28:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:49.399 true 00:11:49.399 21:28:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:49.399 21:28:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.658 21:28:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.917 21:28:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:49.917 21:28:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:49.917 true 00:11:49.917 21:28:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:49.917 21:28:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.295 21:28:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.554 21:28:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:51.554 21:28:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:51.554 true 00:11:51.554 21:28:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:51.554 21:28:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.813 21:28:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.072 21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:52.072 21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:52.330 true 00:11:52.330 21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:52.330 21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.330 21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.589 21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:52.589 21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:52.848 true 00:11:52.848 21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:52.848 
21:28:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.782 21:28:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.782 21:28:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:53.782 21:28:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:53.782 true 00:11:54.040 21:28:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:54.040 21:28:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.040 21:28:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.299 21:28:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:54.299 21:28:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:54.557 true 00:11:54.557 21:28:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:54.557 21:28:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.934 21:28:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.934 21:28:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:55.934 21:28:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:55.934 true 00:11:56.193 21:28:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:56.193 21:28:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.130 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.130 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1021 00:11:57.130 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:57.389 true 00:11:57.389 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:57.389 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.389 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.648 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:57.648 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:57.905 true 00:11:57.905 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:57.905 21:28:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.281 21:28:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.281 21:28:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:59.281 21:28:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:59.281 true 00:11:59.281 21:28:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:11:59.281 21:28:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.216 21:29:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.475 21:29:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:00.475 21:29:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:00.734 true 00:12:00.734 21:29:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:12:00.734 21:29:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.734 21:29:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.993 21:29:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:00.993 21:29:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:00.993 true 00:12:01.252 21:29:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:12:01.252 21:29:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.447 21:29:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.448 21:29:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:02.448 21:29:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:02.706 true 00:12:02.706 21:29:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:12:02.706 21:29:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.643 21:29:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.643 21:29:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:03.643 21:29:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:03.902 true 00:12:03.902 21:29:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:12:03.902 21:29:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.902 21:29:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.161 21:29:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:04.161 21:29:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:04.470 true 00:12:04.470 21:29:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539 00:12:04.470 21:29:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:04.753 21:29:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:04.753 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:12:05.012 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:12:05.012 true
00:12:05.012 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539
00:12:05.012 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:05.272 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:05.272 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:12:05.272 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:12:05.531 true
00:12:05.531 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539
00:12:05.531 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:05.789 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:05.789 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:12:05.789 21:29:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:12:06.047 Initializing NVMe Controllers
00:12:06.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:06.047 Controller IO queue size 128, less than required.
00:12:06.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:06.047 Controller IO queue size 128, less than required.
00:12:06.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:06.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:06.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:06.048 Initialization complete. Launching workers.
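The block ending at "Initialization complete. Launching workers." is the I/O generator's own stdout, flushed as it shuts down, and the table below is its final per-namespace summary. Two reading notes. First, the repeated "Controller IO queue size 128, less than required" warning means the workload asked for a deeper queue than the 128 entries the controller advertises per I/O queue, so surplus requests simply wait inside the host-side NVMe driver. Second, in the table, IOPS and MiB/s are per-namespace throughput and Average/min/max are completion latencies in microseconds; NSID 1 (the repeatedly hot-removed Delay0 namespace) averages nearly an order of magnitude slower than NSID 2 (the NULL1 bdev being resized). The Total row's average is the IOPS-weighted mean of the two device rows, which can be checked against the printed values, for example with bc:

    $ echo 'scale=2; (1293.26*61853.22 + 17563.86*7287.81) / (1293.26 + 17563.86)' | bc
    11030.01

in agreement with the Total column.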
00:12:06.048 ========================================================
00:12:06.048                                                  Latency(us)
00:12:06.048 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:12:06.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1293.26       0.63   61853.22    2419.02 1053136.32
00:12:06.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17563.86       8.58    7287.81    1977.40  574253.07
00:12:06.048 ========================================================
00:12:06.048 Total                                                                  :   18857.12       9.21   11030.01    1977.40 1053136.32
00:12:06.048
00:12:06.048 true
00:12:06.048 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1326539
00:12:06.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1326539) - No such process
00:12:06.048 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1326539
00:12:06.048 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:06.306 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:06.306 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:06.306 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:06.306 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:06.306 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:06.306 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:06.565 null0
00:12:06.565 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:06.565 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:06.565 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:12:06.565 null1
00:12:06.824 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:06.824 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:06.824 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:12:06.824 null2
00:12:06.824 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:06.824 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:06.824 21:29:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:12:07.082 null3
00:12:07.083 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:07.083 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:07.083 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:07.083 null4 00:12:07.083 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:07.083 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:07.083 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:07.341 null5 00:12:07.341 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:07.341 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:07.341 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:07.599 null6 00:12:07.599 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:07.599 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:07.599 21:29:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:07.857 null7 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
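With the single-namespace phase finished, the test sets up its parallel phase: eight null bdevs, null0 through null7, each created by bdev_null_create <name> 100 4096 (size in MiB, then block size in bytes), and one background add_remove worker per bdev. A sketch of the driver consistent with the @58-@66 tags around this point (the & backgrounding is an assumption implied by the pids+=($!) bookkeeping rather than visible verbatim in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                    # @58
    pids=()                                       # @58
    for ((i = 0; i < nthreads; i++)); do          # @59
        $rpc bdev_null_create "null$i" 100 4096   # @60: create the backing bdevs first
    done
    for ((i = 0; i < nthreads; i++)); do          # @62
        add_remove $((i + 1)) "null$i" &          # @63: namespace ID i+1 paired with bdev null$i
        pids+=($!)                                # @64: remember each worker's PID
    done
    wait "${pids[@]}"                             # @66: join all eight workers

The eight PIDs passed to wait at @66 below (1332385 through 1332398) are exactly the values collected by the @64 entries.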
00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
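Each worker runs the add_remove shell function, whose body can be read back from the @14-@18 tags interleaved through these entries: ten rounds of attaching its bdev at a fixed namespace ID and detaching it again. A hedged reconstruction (the function shape is inferred from the tags; $rpc is the same shorthand as in the driver sketch above):

    add_remove() {
        local nsid=$1 bdev=$2                                                         # @14
        for ((i = 0; i < 10; i++)); do                                                # @16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

With eight of these loops hammering the same subsystem concurrently, the @17 add and @18 remove entries below interleave in arbitrary order; those attach/detach races are precisely what this phase of the test exercises.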
00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1332385 1332386 1332388 1332390 1332392 1332394 1332396 1332398 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:07.858 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:08.116 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:08.116 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.117 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:08.117 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:08.117 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:08.117 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:08.117 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:08.117 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:08.375 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:08.633 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:08.633 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:08.633 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:08.633 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:08.633 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:08.633 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:08.633 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:08.633 21:29:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:08.891 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:09.150 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:09.150 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:09.150 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:09.150 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:09.150 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.150 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.150 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:09.150 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:09.409 21:29:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.409 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:09.668 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:09.668 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:09.668 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:09.668 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:09.668 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:09.668 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.668 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.668 21:29:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:09.927 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:10.186 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:10.186 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:10.186 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:10.186 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:10.186 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:10.186 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.186 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:10.186 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:10.445 21:29:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.445 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:10.703 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:10.703 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:10.703 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:10.703 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.703 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:10.703 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:10.703 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.703 21:29:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:10.962 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:11.221 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:11.221 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:11.221 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:11.221 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:11.221 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:11.221 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.221 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:11.221 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:11.480 21:29:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.480 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.739 21:29:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.997 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.255 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.256 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:12.256 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.256 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.256 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:12.514 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.514 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.514 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.514 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:12.514 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.514 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:12:12.514 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.515 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.773 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:12.774 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:12.774 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:12.774 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:12.774 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.774 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:12.774 21:29:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:12.774 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:12.774 21:29:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:13.032 rmmod nvme_tcp
00:12:13.032 rmmod nvme_fabrics
00:12:13.032 rmmod nvme_keyring
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1325985 ']'
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1325985
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 1325985 ']'
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 1325985
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1325985
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1325985'
00:12:13.032 killing process with pid 1325985
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 1325985
00:12:13.032 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 1325985
00:12:13.290 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:13.290 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:13.290 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:13.290 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:13.290 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:13.290 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:13.290 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:13.290 21:29:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:15.824 21:29:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:15.824
00:12:15.824 real 0m48.825s
00:12:15.824 user 3m23.127s
00:12:15.824 sys 0m16.144s
00:12:15.824 21:29:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable
00:12:15.824 21:29:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:12:15.824 ************************************
00:12:15.824 END TEST nvmf_ns_hotplug_stress
00:12:15.824 ************************************
00:12:15.824 21:29:15 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:15.824 21:29:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:12:15.824 21:29:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:12:15.824 21:29:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:15.824 ************************************
00:12:15.824 START TEST nvmf_connect_stress
00:12:15.824 ************************************
00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:15.824 * Looking for test storage...
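[Editor's note] The add/remove churn that fills the trace above is the main loop of ns_hotplug_stress.sh: ten passes that hot-add namespaces 1-8 (each backed by one of the null bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 and then hot-remove them while initiators stay connected. A minimal sketch reconstructed from the @16-@18 trace lines follows; the `shuf` ordering and variable names are assumptions, not the verbatim script, which lives at spdk/test/nvmf/target/ns_hotplug_stress.sh.

  #!/usr/bin/env bash
  # Reconstruction of the loop traced at ns_hotplug_stress.sh@16-@18.
  # rpc.py talks to the running nvmf_tgt over /var/tmp/spdk.sock.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do        # matches the "(( i < 10 ))" checks in the trace
      # Hot-add namespaces 1..8; the trace shows a different order each pass,
      # so `shuf` here is an assumption about how that order is produced.
      for n in $(seq 1 8 | shuf); do
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      # ...and hot-remove them again while the stress initiator is connected.
      for n in $(seq 1 8 | shuf); do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done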
00:12:15.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:15.824 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:15.825 21:29:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.396 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:22.397 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:22.397 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:22.397 Found net devices under 0000:af:00.0: cvl_0_0 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:22.397 21:29:21 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:22.397 Found net devices under 0000:af:00.1: cvl_0_1 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.397 21:29:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:22.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms
00:12:22.397
00:12:22.397 --- 10.0.0.2 ping statistics ---
00:12:22.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:22.397 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:22.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:22.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:12:22.397
00:12:22.397 --- 10.0.0.1 ping statistics ---
00:12:22.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:22.397 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1337433
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1337433
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 1337433 ']'
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:22.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:22.397 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:12:22.397 [2024-06-07 21:29:22.205164] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
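[Editor's note] For NET_TYPE=phy, nvmf/common.sh splits the two detected e810 ports between the root network namespace and a dedicated one, so target and initiator traffic crosses a real link on a single host; the two pings above verify that path in both directions before the target starts. Condensed from the @248-@268 trace lines (same commands; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are what this particular run detected, and the earlier address flushes are omitted):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                         # root ns -> namespaced target port
  ip netns exec "$NS" ping -c 1 10.0.0.1     # and back again

Once both pings succeed, NVMF_APP is prefixed with the namespace command (@270), which is why nvmf_tgt below is launched via "ip netns exec cvl_0_0_ns_spdk" and listens inside the namespace.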
00:12:22.397 [2024-06-07 21:29:22.205202] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.397 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.397 [2024-06-07 21:29:22.277936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.398 [2024-06-07 21:29:22.375422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.398 [2024-06-07 21:29:22.375461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.398 [2024-06-07 21:29:22.375472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.398 [2024-06-07 21:29:22.375480] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.398 [2024-06-07 21:29:22.375488] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.398 [2024-06-07 21:29:22.375597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.398 [2024-06-07 21:29:22.379044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.398 [2024-06-07 21:29:22.379049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.398 [2024-06-07 21:29:22.527568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.398 [2024-06-07 21:29:22.557178] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.398 NULL1 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1337621 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.398 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:22.657 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.657 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.657 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.926 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.926 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:22.926 21:29:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.926 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.926 21:29:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.191 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:23.191 21:29:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:23.191 21:29:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.191 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:23.191 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.450 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:23.450 21:29:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1337621 00:12:23.450 21:29:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.450 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:23.450 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.708 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:23.708 21:29:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:23.708 21:29:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.708 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:23.708 21:29:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.275 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.275 21:29:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:24.275 21:29:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.275 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.275 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.533 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.533 21:29:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:24.533 21:29:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.533 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.533 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.792 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.792 21:29:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:24.792 21:29:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.792 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.792 21:29:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.051 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.051 21:29:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:25.051 21:29:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.051 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.051 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.310 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.310 21:29:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:25.310 21:29:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.310 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.310 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.878 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.878 21:29:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:25.878 21:29:25 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.878 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.878 21:29:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.137 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.137 21:29:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:26.137 21:29:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.137 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.137 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.396 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.396 21:29:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:26.396 21:29:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.396 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.396 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.654 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.654 21:29:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:26.654 21:29:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.654 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.654 21:29:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.913 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.913 21:29:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:26.913 21:29:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.913 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.913 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.481 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.481 21:29:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:27.481 21:29:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.481 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.481 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.740 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.740 21:29:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:27.740 21:29:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.740 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.740 21:29:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.999 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.999 21:29:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:27.999 21:29:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:12:27.999 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.999 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.258 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:28.258 21:29:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:28.258 21:29:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.258 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:28.258 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.517 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:28.517 21:29:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:28.517 21:29:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.517 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:28.517 21:29:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.085 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.085 21:29:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:29.085 21:29:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.085 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.085 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.344 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.344 21:29:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:29.344 21:29:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.344 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.344 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.603 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.603 21:29:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:29.603 21:29:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.603 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.603 21:29:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.861 21:29:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.861 21:29:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:29.861 21:29:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.861 21:29:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.861 21:29:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.429 21:29:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.429 21:29:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:30.429 21:29:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.429 21:29:30 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.429 21:29:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.687 21:29:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.687 21:29:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:30.687 21:29:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.687 21:29:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.687 21:29:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.946 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.946 21:29:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:30.946 21:29:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.946 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.946 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.204 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.204 21:29:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:31.204 21:29:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.204 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.204 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.461 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.462 21:29:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:31.462 21:29:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.462 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.462 21:29:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.028 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.028 21:29:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:32.028 21:29:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.028 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.028 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.288 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.288 21:29:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:32.288 21:29:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.288 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.288 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.566 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.566 21:29:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:32.566 21:29:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.566 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:12:32.566 21:29:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.567 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1337621 00:12:32.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1337621) - No such process 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1337621 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:32.839 rmmod nvme_tcp 00:12:32.839 rmmod nvme_fabrics 00:12:32.839 rmmod nvme_keyring 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1337433 ']' 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1337433 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 1337433 ']' 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 1337433 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:32.839 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1337433 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1337433' 00:12:33.097 killing process with pid 1337433 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 1337433 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 1337433 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.097 21:29:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.631 21:29:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:35.631 00:12:35.631 real 0m19.800s 00:12:35.631 user 0m40.625s 00:12:35.631 sys 0m8.630s 00:12:35.631 21:29:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:35.631 21:29:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.631 ************************************ 00:12:35.631 END TEST nvmf_connect_stress 00:12:35.631 ************************************ 00:12:35.631 21:29:35 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:35.631 21:29:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:35.631 21:29:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:35.631 21:29:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.631 ************************************ 00:12:35.631 START TEST nvmf_fused_ordering 00:12:35.631 ************************************ 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:35.631 * Looking for test storage... 
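The long run of repeated 'kill -0 1337621' / 'rpc_cmd' pairs in the connect_stress output above is the core of connect_stress.sh: the harness backgrounds the connect_stress tool, then keeps replaying a pre-built RPC batch against the target for as long as the tool stays alive. A minimal sketch reconstructed from the xtrace — paths are shortened, and the here-document the 'seq 1 20' / 'cat' loop appends to rpc.txt is not visible in this log, so its payload is left abstract:

    # background the stress tool against the subsystem and remember its pid
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # kill -0 delivers no signal; it only tests whether the pid still exists
    while kill -0 "$PERF_PID"; do
        rpc_cmd <"$rpcs"    # replay the RPC batch built by the seq 1 20 / cat loop
    done
    wait "$PERF_PID"        # reap the child once its 10-second run (-t 10) ends
    rm -f "$rpcs"

When pid 1337621 exits, kill -0 fails with the '(1337621) - No such process' message logged above and the script falls through to wait and cleanup.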
00:12:35.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.631 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.632 21:29:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:35.632 21:29:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.632 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:35.632 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.632 21:29:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.632 21:29:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:42.204 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:42.204 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:42.204 Found net devices under 0000:af:00.0: cvl_0_0 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.204 21:29:41 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:42.204 Found net devices under 0000:af:00.1: cvl_0_1 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.204 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.205 21:29:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
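Collected in order, the nvmf_tcp_init steps traced above build the two-port test topology: the target-side E810 port (cvl_0_0) is moved into its own network namespace, presumably so initiator-to-target traffic crosses the physical link between the two ports rather than being short-circuited inside the host stack:

    # isolate the target port in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 in the root namespace; the target gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring up both links plus the namespaced loopback
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP listener port, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping statistics that follow confirm the 10.0.0.0/24 path is up before the target application is started.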
00:12:42.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:12:42.205 00:12:42.205 --- 10.0.0.2 ping statistics --- 00:12:42.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.205 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:12:42.205 00:12:42.205 --- 10.0.0.1 ping statistics --- 00:12:42.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.205 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1343492 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1343492 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 1343492 ']' 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:42.205 21:29:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:42.205 [2024-06-07 21:29:42.193947] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
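nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers; the subsystem plumbing that follows (both here and in the connect_stress run above) is a short sequence of rpc_cmd calls. A sketch assembled from the traced commands, with paths shortened — rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

    # -i 0 sets the shm id, -e 0xFFFF enables all tracepoint groups,
    # -m 0x2 pins the app to core 1 (matching 'Reactor started on core 1')
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512 B blocks ('size: 1GB' below)
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Every RPC above is taken verbatim from the xtrace; only the shortened paths are editorial.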
00:12:42.205 [2024-06-07 21:29:42.194003] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.205 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.205 [2024-06-07 21:29:42.282011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.205 [2024-06-07 21:29:42.370599] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.205 [2024-06-07 21:29:42.370641] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.205 [2024-06-07 21:29:42.370651] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.205 [2024-06-07 21:29:42.370660] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.205 [2024-06-07 21:29:42.370667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.205 [2024-06-07 21:29:42.370690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.143 [2024-06-07 21:29:43.175242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.143 [2024-06-07 21:29:43.195399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:43.143 21:29:43 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.143 NULL1 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:43.143 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.144 21:29:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:43.144 21:29:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:43.144 [2024-06-07 21:29:43.249440] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:12:43.144 [2024-06-07 21:29:43.249485] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343767 ] 00:12:43.144 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.712 Attached to nqn.2016-06.io.spdk:cnode1 00:12:43.712 Namespace ID: 1 size: 1GB 00:12:43.712 fused_ordering(0) 00:12:43.712 fused_ordering(1) 00:12:43.712 fused_ordering(2) 00:12:43.712 fused_ordering(3) 00:12:43.712 fused_ordering(4) 00:12:43.712 fused_ordering(5) 00:12:43.712 fused_ordering(6) 00:12:43.712 fused_ordering(7) 00:12:43.712 fused_ordering(8) 00:12:43.712 fused_ordering(9) 00:12:43.712 fused_ordering(10) 00:12:43.712 fused_ordering(11) 00:12:43.712 fused_ordering(12) 00:12:43.712 fused_ordering(13) 00:12:43.712 fused_ordering(14) 00:12:43.712 fused_ordering(15) 00:12:43.712 fused_ordering(16) 00:12:43.712 fused_ordering(17) 00:12:43.712 fused_ordering(18) 00:12:43.712 fused_ordering(19) 00:12:43.712 fused_ordering(20) 00:12:43.712 fused_ordering(21) 00:12:43.712 fused_ordering(22) 00:12:43.712 fused_ordering(23) 00:12:43.712 fused_ordering(24) 00:12:43.712 fused_ordering(25) 00:12:43.712 fused_ordering(26) 00:12:43.712 fused_ordering(27) 00:12:43.712 fused_ordering(28) 00:12:43.712 fused_ordering(29) 00:12:43.712 fused_ordering(30) 00:12:43.712 fused_ordering(31) 00:12:43.712 fused_ordering(32) 00:12:43.712 fused_ordering(33) 00:12:43.712 fused_ordering(34) 00:12:43.712 fused_ordering(35) 00:12:43.712 fused_ordering(36) 00:12:43.712 fused_ordering(37) 00:12:43.712 fused_ordering(38) 00:12:43.712 fused_ordering(39) 00:12:43.712 fused_ordering(40) 00:12:43.712 fused_ordering(41) 00:12:43.712 fused_ordering(42) 00:12:43.712 fused_ordering(43) 00:12:43.712 
fused_ordering(44) 00:12:43.712 [repetitive fused_ordering counter output condensed: the loop printed one fused_ordering(n) entry per iteration for n = 45 through 1011, with timestamps advancing from 00:12:43.712 to 00:12:46.051] fused_ordering(1012)
00:12:46.051 fused_ordering(1013) 00:12:46.051 fused_ordering(1014) 00:12:46.051 fused_ordering(1015) 00:12:46.051 fused_ordering(1016) 00:12:46.051 fused_ordering(1017) 00:12:46.051 fused_ordering(1018) 00:12:46.051 fused_ordering(1019) 00:12:46.051 fused_ordering(1020) 00:12:46.051 fused_ordering(1021) 00:12:46.051 fused_ordering(1022) 00:12:46.051 fused_ordering(1023) 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:46.051 rmmod nvme_tcp 00:12:46.051 rmmod nvme_fabrics 00:12:46.051 rmmod nvme_keyring 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1343492 ']' 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1343492 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 1343492 ']' 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 1343492 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1343492 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1343492' 00:12:46.051 killing process with pid 1343492 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 1343492 00:12:46.051 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 1343492 00:12:46.310 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.310 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.310 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.310 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.310 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.310 21:29:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.310 21:29:46 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.310 21:29:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.847 21:29:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:48.848 00:12:48.848 real 0m13.085s 00:12:48.848 user 0m7.530s 00:12:48.848 sys 0m7.048s 00:12:48.848 21:29:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:48.848 21:29:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:48.848 ************************************ 00:12:48.848 END TEST nvmf_fused_ordering 00:12:48.848 ************************************ 00:12:48.848 21:29:48 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:48.848 21:29:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:48.848 21:29:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:48.848 21:29:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:48.848 ************************************ 00:12:48.848 START TEST nvmf_delete_subsystem 00:12:48.848 ************************************ 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:48.848 * Looking for test storage... 00:12:48.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.848 
21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories, duplicated several times by repeated sourcing, condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=[rotation of the same entries, condensed] 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=[rotation of the same entries, condensed] 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [the exported PATH, condensed] 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.848 21:29:48 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:48.848 21:29:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
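These pci_bus_cache lookups key the candidate NICs by PCI vendor:device ID (0x8086:0x1592 and 0x8086:0x159b are the E810 variants; the x722 and Mellanox entries that follow work the same way), and each matched function is later resolved to its kernel net interface through sysfs, which is where the "Found net devices under ..." lines further down come from. A minimal standalone sketch of that scan, written against the standard /sys/bus/pci layout only (the harness's real logic lives in nvmf/common.sh, so treat this as an illustration, not the test's code):

#!/usr/bin/env bash
# Enumerate Intel E810 PCI functions and the net interfaces bound to them.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")
    device=$(<"$dev/device")
    [[ $vendor == 0x8086 ]] || continue
    case $device in
        0x1592|0x159b)
            # A function with a bound network driver exposes its interface name under <pci>/net/
            nics=$(ls "$dev/net" 2>/dev/null)
            echo "Found ${dev##*/} ($vendor - $device): ${nics:-no net device bound}"
            ;;
    esac
done

The sketch only reads sysfs, so it needs no privileges; on this rig it would report the two 0000:af:00.x functions with the interfaces the ice driver created (cvl_0_0 and cvl_0_1, as the log confirms below).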
00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.418 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:55.419 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:55.419 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:55.419 21:29:54 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:55.419 Found net devices under 0000:af:00.0: cvl_0_0 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:55.419 Found net devices under 0000:af:00.1: cvl_0_1 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
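At this point the harness has picked cvl_0_0 as the target-side port and cvl_0_1 as the initiator-side port, and the xtrace that follows wires them into a split-namespace topology. Condensed from the commands logged below (a summary of that sequence, not extra steps):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port disappears into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> initiator

Keeping the target port in its own namespace forces the NVMe/TCP traffic over the physical link between the two E810 ports rather than letting the kernel loop it back internally.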
00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.419 21:29:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:55.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:12:55.419 00:12:55.419 --- 10.0.0.2 ping statistics --- 00:12:55.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.419 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:12:55.419 00:12:55.419 --- 10.0.0.1 ping statistics --- 00:12:55.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.419 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1348296 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1348296 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 1348296 ']' 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:55.419 21:29:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.419 [2024-06-07 21:29:55.241259] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:12:55.419 [2024-06-07 21:29:55.241323] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.419 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.419 [2024-06-07 21:29:55.337800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:55.419 [2024-06-07 21:29:55.424837] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:55.419 [2024-06-07 21:29:55.424882] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.419 [2024-06-07 21:29:55.424893] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.419 [2024-06-07 21:29:55.424902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.419 [2024-06-07 21:29:55.424909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.419 [2024-06-07 21:29:55.425016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.419 [2024-06-07 21:29:55.425020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.988 [2024-06-07 21:29:56.215423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.988 [2024-06-07 21:29:56.235624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.988 NULL1 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.988 Delay0 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.988 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.246 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:56.246 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.246 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1348508 00:12:56.246 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:56.246 21:29:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:56.246 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.246 [2024-06-07 21:29:56.316551] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
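Stripped of the rpc_cmd/xtrace plumbing, the target construction and workload launch above reduce to the sequence below. This is a sketch of standalone equivalents, assuming SPDK's stock scripts/rpc.py talking to the default /var/tmp/spdk.sock (in the log, rpc_cmd is the harness's wrapper around the same RPCs):

# Transport, subsystem (serial SPDK00000000000001, up to 10 namespaces), and a TCP listener on the target address
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Backing store: a 1000 MiB null bdev (512 B blocks) wrapped in a delay bdev; the -r/-t/-w/-n
# latencies are in microseconds, so roughly one second each, which keeps I/O pinned in flight
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# 5 s of QD-128 random 70/30 read/write, 512 B I/O, cores 2-3 (mask 0xC), against the listener
build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

After the sleep 2 above gives perf time to connect and queue against the slow Delay0 namespace, the script deletes the subsystem out from under it, which is what produces the wall of aborted completions that follows.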
00:12:58.148 21:29:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:58.148 21:29:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable
00:12:58.148 21:29:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:58.407 Write completed with error (sct=0, sc=8)
00:12:58.407 starting I/O failed: -6
00:12:58.407 Read completed with error (sct=0, sc=8)
00:12:58.407 [many similar "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted]
00:12:58.407 [2024-06-07 21:29:58.526134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbcb0 is same with the state(5) to be set
00:12:58.407 [many similar completion-error lines omitted]
00:12:58.408 [2024-06-07 21:29:58.528045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebad00 is same with the state(5) to be set
00:12:58.408 [many similar completion-error and "starting I/O failed: -6" lines omitted]
00:12:59.344 [2024-06-07 21:29:59.494882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9a500 is same with the state(5) to be set
00:12:59.344 [many similar completion-error lines omitted]
00:12:59.344 [2024-06-07 21:29:59.530009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebe650 is same with the state(5) to be set
00:12:59.344 [many similar completion-error lines omitted]
00:12:59.344 [2024-06-07 21:29:59.530893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebbe90 is same with the state(5) to be set
00:12:59.344 [many similar completion-error lines omitted]
00:12:59.344 [2024-06-07 21:29:59.532633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f591800c780 is same with the state(5) to be set
00:12:59.345 [many similar completion-error lines omitted]
00:12:59.345 [2024-06-07 21:29:59.532870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f591800bfe0 is same with the state(5) to be set
00:12:59.345 Initializing NVMe Controllers
00:12:59.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:59.345 Controller IO queue size 128, less than required.
00:12:59.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:59.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:59.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:59.345 Initialization complete. Launching workers.
00:12:59.345 ========================================================
00:12:59.345                                                                  Latency(us)
00:12:59.345 Device Information                                                       :   IOPS   MiB/s    Average        min         max
00:12:59.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.92    0.08  897094.29    1274.38  1011147.05
00:12:59.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 181.34    0.09  934240.15     495.51  2002160.13
00:12:59.345 ========================================================
00:12:59.345 Total                                                                    : 351.26    0.17  916271.43     495.51  2002160.13
00:12:59.345
00:12:59.345 [2024-06-07 21:29:59.533378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9a500 (9): Bad file descriptor
00:12:59.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:12:59.345 21:29:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:12:59.345 21:29:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:12:59.345 21:29:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1348508
00:12:59.345 21:29:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1348508
00:12:59.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1348508) - No such process
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1348508
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 1348508
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 1348508
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:59.912 21:30:00
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.912 [2024-06-07 21:30:00.061645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1349135 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1349135 00:12:59.912 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:59.912 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.912 [2024-06-07 21:30:00.120657] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
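Two things in the run above are worth decoding. The completion status pair sct=0, sc=8 is generic command status 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion": deleting the subsystem tears down its queues, so every in-flight command comes back aborted rather than completed. "starting I/O failed: -6" (consistent with -ENXIO, "no such device or address") is the initiator failing to submit new commands once its queue pair is gone. Both are the outcome this test wants, which is why the script then asserts that waiting on the perf process fails: the NOT helper from autotest_common.sh inverts a command's exit status. A minimal reconstruction of the pattern (the real helper also records the exit code, as the es=1 trace shows):

    NOT() {               # succeed only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT wait "$perf_pid"  # passes: perf died when the subsystem was deleted

The second perf run launched just above uses -t 3 and is left undisturbed; in its summary further down, every I/O completes, with latencies pinned just above the 1,000,000 us that the Delay0 bdev injects (min 1000224.60 us).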
00:13:00.480 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:00.480 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1349135
00:13:00.480 21:30:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:01.047 21:30:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:01.047 21:30:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1349135
00:13:01.047 21:30:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:01.615 21:30:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:01.615 21:30:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1349135
00:13:01.615 21:30:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:01.874 21:30:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:01.874 21:30:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1349135
00:13:01.874 21:30:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:02.441 21:30:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:02.441 21:30:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1349135
00:13:02.441 21:30:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:03.007 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:03.007 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1349135
00:13:03.007 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:13:03.264 Initializing NVMe Controllers
00:13:03.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:03.264 Controller IO queue size 128, less than required.
00:13:03.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:03.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:03.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:03.264 Initialization complete. Launching workers.
00:13:03.264 ========================================================
00:13:03.265                                                                  Latency(us)
00:13:03.265 Device Information                                                       :   IOPS   MiB/s    Average         min         max
00:13:03.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00    0.06 1003690.21  1000224.60  1013933.19
00:13:03.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00    0.06 1006662.79  1000414.93  1042860.36
00:13:03.265 ========================================================
00:13:03.265 Total                                                                    : 256.00    0.12 1005176.50  1000224.60  1042860.36
00:13:03.265
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1349135
00:13:03.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1349135) - No such process
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1349135
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:03.523 rmmod nvme_tcp
00:13:03.523 rmmod nvme_fabrics
00:13:03.523 rmmod nvme_keyring
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1348296 ']'
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1348296
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 1348296 ']'
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 1348296
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1348296
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1348296'
00:13:03.523 killing process with pid 1348296
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 1348296
00:13:03.523 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait
1348296
00:13:03.782 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:03.782 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:03.782 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:03.782 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:03.782 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:03.782 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:03.782 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:03.782 21:30:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:06.318 21:30:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:06.318
00:13:06.318 real    0m17.365s
00:13:06.318 user    0m31.224s
00:13:06.318 sys     0m5.753s
00:13:06.318 21:30:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable
00:13:06.318 21:30:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:13:06.318 ************************************
00:13:06.318 END TEST nvmf_delete_subsystem
00:13:06.318 ************************************
00:13:06.318 21:30:06 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:13:06.318 21:30:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:13:06.318 21:30:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:13:06.318 21:30:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:06.318 ************************************
00:13:06.318 START TEST nvmf_ns_masking
00:13:06.318 ************************************
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:13:06.318 * Looking for test storage...
00:13:06.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain PATH entries omitted]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated toolchain PATH entries omitted]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated toolchain PATH entries omitted]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:13:06.318 21:30:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated toolchain PATH entries omitted]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=dd36a589-ec4b-4c81-b2f1-29cb4f3c679c
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit
00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:13:06.319 21:30:06
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.319 21:30:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:12.959 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:12.959 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:12.959 Found net devices under 0000:af:00.0: cvl_0_0 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
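The device discovery above is plain sysfs walking: for each supported PCI ID the harness globs /sys/bus/pci/devices/$pci/net/ to find the kernel interface bound to that function, which is how 0000:af:00.0 resolves to cvl_0_0 here (and 0000:af:00.1 to cvl_0_1 just below). The same probe can be run by hand; the glob comes straight from the trace, the ls invocation is only an illustration:

    # Which netdev(s) sit on top of a given PCI function?
    ls /sys/bus/pci/devices/0000:af:00.0/net/
    # -> cvl_0_0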
00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:12.959 Found net devices under 0000:af:00.1: cvl_0_1 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.959 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:12.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:12.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms
00:13:12.960
00:13:12.960 --- 10.0.0.2 ping statistics ---
00:13:12.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:12.960 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:12.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:12.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms
00:13:12.960
00:13:12.960 --- 10.0.0.1 ping statistics ---
00:13:12.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:12.960 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1354433
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1354433
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 1354433 ']'
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:12.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable
00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:12.960 [2024-06-07 21:30:12.616848] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
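Both pings go through because nvmftestinit has split the two E810 ports across network namespaces: the target port cvl_0_0 (10.0.0.2) was moved into the namespace cvl_0_0_ns_spdk, while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, which is also why nvmf_tgt is launched under ip netns exec above. Condensed from the nvmf_tcp_init trace (the nvmf_tgt path shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF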
00:13:12.960 [2024-06-07 21:30:12.616886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.960 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.960 [2024-06-07 21:30:12.696317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.960 [2024-06-07 21:30:12.788629] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.960 [2024-06-07 21:30:12.788670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.960 [2024-06-07 21:30:12.788682] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.960 [2024-06-07 21:30:12.788691] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.960 [2024-06-07 21:30:12.788698] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.960 [2024-06-07 21:30:12.788755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.960 [2024-06-07 21:30:12.788777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.960 [2024-06-07 21:30:12.788916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.960 [2024-06-07 21:30:12.788917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.960 21:30:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:12.960 [2024-06-07 21:30:13.173466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.960 21:30:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:12.960 21:30:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:12.960 21:30:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:13.218 Malloc1 00:13:13.218 21:30:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:13.477 Malloc2 00:13:13.477 21:30:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:13.478 21:30:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:14.045 21:30:14 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.045 [2024-06-07 21:30:14.163359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.045 21:30:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:13:14.045 21:30:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd36a589-ec4b-4c81-b2f1-29cb4f3c679c -a 10.0.0.2 -s 4420 -i 4 00:13:14.304 21:30:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.304 21:30:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:13:14.304 21:30:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.304 21:30:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:14.304 21:30:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:16.208 [ 0]:0x1 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=db76ac270e144610be1b724cfb68f791 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ db76ac270e144610be1b724cfb68f791 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.208 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@39 -- # grep 0x1 00:13:16.468 [ 0]:0x1 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=db76ac270e144610be1b724cfb68f791 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ db76ac270e144610be1b724cfb68f791 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:16.468 [ 1]:0x2 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.468 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:16.727 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=58bfff3223184fac9f1d6e7add5cb14d 00:13:16.727 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 58bfff3223184fac9f1d6e7add5cb14d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.727 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:13:16.727 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.727 21:30:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.987 21:30:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:16.987 21:30:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:13:16.987 21:30:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd36a589-ec4b-4c81-b2f1-29cb4f3c679c -a 10.0.0.2 -s 4420 -i 4 00:13:17.246 21:30:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:17.246 21:30:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:13:17.246 21:30:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.246 21:30:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:13:17.246 21:30:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:13:17.246 21:30:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:13:19.781 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:19.781 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:19.782 [ 0]:0x2 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=58bfff3223184fac9f1d6e7add5cb14d 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 58bfff3223184fac9f1d6e7add5cb14d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:19.782 [ 0]:0x1 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=db76ac270e144610be1b724cfb68f791 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ db76ac270e144610be1b724cfb68f791 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:19.782 [ 1]:0x2 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=58bfff3223184fac9f1d6e7add5cb14d 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 58bfff3223184fac9f1d6e7add5cb14d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.782 21:30:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # es=1 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:20.041 [ 0]:0x2 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:20.041 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:20.299 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=58bfff3223184fac9f1d6e7add5cb14d 00:13:20.299 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 58bfff3223184fac9f1d6e7add5cb14d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.299 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:13:20.299 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.299 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:20.299 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:13:20.299 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd36a589-ec4b-4c81-b2f1-29cb4f3c679c -a 10.0.0.2 -s 4420 -i 4 00:13:20.557 21:30:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:20.557 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:13:20.558 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.558 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:13:20.558 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:13:20.558 21:30:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:23.089 
21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:23.089 [ 0]:0x1 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=db76ac270e144610be1b724cfb68f791 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ db76ac270e144610be1b724cfb68f791 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:23.089 [ 1]:0x2 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=58bfff3223184fac9f1d6e7add5cb14d 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 58bfff3223184fac9f1d6e7add5cb14d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.089 21:30:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:23.089 [ 0]:0x2 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=58bfff3223184fac9f1d6e7add5cb14d 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 58bfff3223184fac9f1d6e7add5cb14d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:23.089 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:23.348 [2024-06-07 21:30:23.444146] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:23.348 request: 00:13:23.348 { 00:13:23.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.348 "nsid": 2, 00:13:23.348 "host": "nqn.2016-06.io.spdk:host1", 00:13:23.348 "method": 
"nvmf_ns_remove_host", 00:13:23.348 "req_id": 1 00:13:23.348 } 00:13:23.348 Got JSON-RPC error response 00:13:23.348 response: 00:13:23.348 { 00:13:23.348 "code": -32602, 00:13:23.348 "message": "Invalid parameters" 00:13:23.348 } 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:23.348 [ 0]:0x2 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=58bfff3223184fac9f1d6e7add5cb14d 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 58bfff3223184fac9f1d6e7add5cb14d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:13:23.348 21:30:23 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.606 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.606 rmmod nvme_tcp 00:13:23.606 rmmod nvme_fabrics 00:13:23.606 rmmod nvme_keyring 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1354433 ']' 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1354433 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 1354433 ']' 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 1354433 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1354433 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1354433' 00:13:23.865 killing process with pid 1354433 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 1354433 00:13:23.865 21:30:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 1354433 00:13:24.124 21:30:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.124 21:30:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:24.124 21:30:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:24.124 21:30:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.124 21:30:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:24.124 21:30:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.124 21:30:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.124 21:30:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.050 
21:30:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.050 00:13:26.050 real 0m20.201s 00:13:26.050 user 0m49.843s 00:13:26.050 sys 0m6.385s 00:13:26.051 21:30:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:26.051 21:30:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:26.051 ************************************ 00:13:26.051 END TEST nvmf_ns_masking 00:13:26.051 ************************************ 00:13:26.051 21:30:26 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:26.051 21:30:26 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:26.051 21:30:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:26.051 21:30:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:26.051 21:30:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.310 ************************************ 00:13:26.310 START TEST nvmf_nvme_cli 00:13:26.310 ************************************ 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:26.310 * Looking for test storage... 00:13:26.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.310 21:30:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:32.880 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:32.880 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:32.880 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.881 21:30:32 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:32.881 Found net devices under 0000:af:00.0: cvl_0_0 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:32.881 Found net devices under 0000:af:00.1: cvl_0_1 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:32.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:13:32.881 00:13:32.881 --- 10.0.0.2 ping statistics --- 00:13:32.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.881 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:13:32.881 00:13:32.881 --- 10.0.0.1 ping statistics --- 00:13:32.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.881 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1360530 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1360530 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 1360530 ']' 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
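The nvmf_tcp_init trace above builds the split-namespace topology used by these TCP tests: the first e810 port (cvl_0_0) moves into a fresh namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits TCP/4420. The same sequence, collected from the trace into one runnable block (interface and namespace names exactly as logged; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator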
00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:32.881 21:30:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:32.881 [2024-06-07 21:30:33.022754] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:13:32.881 [2024-06-07 21:30:33.022811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.881 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.881 [2024-06-07 21:30:33.115901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.141 [2024-06-07 21:30:33.204899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.141 [2024-06-07 21:30:33.204946] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.141 [2024-06-07 21:30:33.204956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.141 [2024-06-07 21:30:33.204965] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.141 [2024-06-07 21:30:33.204973] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.141 [2024-06-07 21:30:33.205040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.141 [2024-06-07 21:30:33.205055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.141 [2024-06-07 21:30:33.205176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.141 [2024-06-07 21:30:33.205176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.707 [2024-06-07 21:30:33.928532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.707 Malloc0 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:33.707 21:30:33 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.707 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.964 Malloc1 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.964 21:30:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.964 [2024-06-07 21:30:34.011432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.964 21:30:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:33.964 00:13:33.964 Discovery Log Number of Records 2, Generation counter 2 00:13:33.964 =====Discovery Log Entry 0====== 00:13:33.964 trtype: tcp 00:13:33.965 adrfam: ipv4 00:13:33.965 subtype: current discovery subsystem 00:13:33.965 treq: not required 00:13:33.965 portid: 0 00:13:33.965 trsvcid: 4420 00:13:33.965 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:33.965 traddr: 10.0.0.2 00:13:33.965 eflags: explicit discovery connections, duplicate discovery information 00:13:33.965 sectype: none 00:13:33.965 =====Discovery Log Entry 1====== 00:13:33.965 trtype: tcp 00:13:33.965 adrfam: ipv4 00:13:33.965 subtype: nvme subsystem 00:13:33.965 treq: not required 00:13:33.965 portid: 0 
00:13:33.965 trsvcid: 4420 00:13:33.965 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:33.965 traddr: 10.0.0.2 00:13:33.965 eflags: none 00:13:33.965 sectype: none 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:33.965 21:30:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.338 21:30:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:35.338 21:30:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:13:35.338 21:30:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.338 21:30:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:13:35.338 21:30:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:13:35.338 21:30:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:37.872 
21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:37.872 /dev/nvme0n1 ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:37.872 21:30:37 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.872 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.873 rmmod nvme_tcp 00:13:37.873 rmmod nvme_fabrics 00:13:37.873 rmmod nvme_keyring 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1360530 ']' 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1360530 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 1360530 ']' 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 1360530 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1360530 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1360530' 00:13:37.873 killing process with pid 1360530 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 1360530 00:13:37.873 21:30:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 1360530 00:13:37.873 21:30:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.873 21:30:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.873 21:30:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.873 21:30:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.873 21:30:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.873 21:30:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.873 21:30:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.873 21:30:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.409 21:30:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.409 00:13:40.409 real 0m13.814s 00:13:40.409 user 0m21.205s 00:13:40.409 sys 0m5.584s 00:13:40.409 21:30:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:40.409 21:30:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:40.409 ************************************ 00:13:40.409 END TEST nvmf_nvme_cli 00:13:40.409 ************************************ 00:13:40.409 21:30:40 nvmf_tcp 
-- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:40.409 21:30:40 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:40.409 21:30:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:40.409 21:30:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:40.409 21:30:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:40.409 ************************************ 00:13:40.409 START TEST nvmf_vfio_user 00:13:40.409 ************************************ 00:13:40.409 21:30:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:40.409 * Looking for test storage... 00:13:40.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.409 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.409 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:40.409 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.409 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:40.410 
21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1361975 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1361975' 00:13:40.410 Process pid: 1361975 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1361975 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 1361975 ']' 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:40.410 21:30:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:40.410 [2024-06-07 21:30:40.398031] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:13:40.410 [2024-06-07 21:30:40.398095] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.410 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.410 [2024-06-07 21:30:40.488707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.410 [2024-06-07 21:30:40.581229] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.410 [2024-06-07 21:30:40.581274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.410 [2024-06-07 21:30:40.581284] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.410 [2024-06-07 21:30:40.581293] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.410 [2024-06-07 21:30:40.581300] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
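Once the target process is up, the vfio-user test provisions its emulated controllers over JSON-RPC, as the trace below records: one VFIOUSER transport, then, per device, a malloc bdev, a subsystem, a namespace, and a listener rooted in a per-device socket directory. A condensed sketch of that sequence follows; the rpc.py path and the /var/run/vfio-user layout are this job's and would need adjusting elsewhere.

    # Sketch distilled from the nvmf_vfio_user setup trace below.
    # Assumes an nvmf_tgt is already running and reachable via rpc.py.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      $rpc bdev_malloc_create 64 512 -b "Malloc$i"
      $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done

With VFIOUSER the listener address is a directory rather than an IP/port pair, which is why each controller gets its own socket directory here.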
00:13:40.410 [2024-06-07 21:30:40.581354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.410 [2024-06-07 21:30:40.581456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.410 [2024-06-07 21:30:40.581564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.410 [2024-06-07 21:30:40.581564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.345 21:30:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:41.345 21:30:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:13:41.345 21:30:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:42.281 21:30:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:42.281 21:30:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:42.281 21:30:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:42.281 21:30:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.281 21:30:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:42.281 21:30:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:42.540 Malloc1 00:13:42.540 21:30:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:42.798 21:30:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:42.798 21:30:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:43.056 21:30:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:43.056 21:30:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:43.056 21:30:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:43.314 Malloc2 00:13:43.314 21:30:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:43.572 21:30:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:43.831 21:30:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:43.831 21:30:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:43.831 21:30:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:44.091 21:30:44 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:44.091 21:30:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:44.091 21:30:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:44.091 21:30:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:44.091 [2024-06-07 21:30:44.125963] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:13:44.091 [2024-06-07 21:30:44.125998] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362714 ] 00:13:44.091 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.091 [2024-06-07 21:30:44.164536] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:44.091 [2024-06-07 21:30:44.174454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.091 [2024-06-07 21:30:44.174478] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffa83daa000 00:13:44.091 [2024-06-07 21:30:44.175454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.091 [2024-06-07 21:30:44.176454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.091 [2024-06-07 21:30:44.177460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.091 [2024-06-07 21:30:44.178471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.091 [2024-06-07 21:30:44.179479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.091 [2024-06-07 21:30:44.180486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.091 [2024-06-07 21:30:44.181486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.091 [2024-06-07 21:30:44.182492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.091 [2024-06-07 21:30:44.183506] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.091 [2024-06-07 21:30:44.183522] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffa83d9f000 00:13:44.091 [2024-06-07 21:30:44.184934] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:44.091 [2024-06-07 21:30:44.201334] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:44.091 [2024-06-07 21:30:44.201366] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:44.092 [2024-06-07 21:30:44.206663] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:44.092 [2024-06-07 21:30:44.206718] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:44.092 [2024-06-07 21:30:44.206823] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:44.092 [2024-06-07 21:30:44.206846] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:44.092 [2024-06-07 21:30:44.206854] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:44.092 [2024-06-07 21:30:44.207662] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:44.092 [2024-06-07 21:30:44.207674] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:44.092 [2024-06-07 21:30:44.207683] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:44.092 [2024-06-07 21:30:44.208666] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:44.092 [2024-06-07 21:30:44.208677] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:44.092 [2024-06-07 21:30:44.208686] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.092 [2024-06-07 21:30:44.209673] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:44.092 [2024-06-07 21:30:44.209685] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.092 [2024-06-07 21:30:44.210680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:44.092 [2024-06-07 21:30:44.210691] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:44.092 [2024-06-07 21:30:44.210697] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:44.092 [2024-06-07 21:30:44.210706] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.092 [2024-06-07 21:30:44.210813] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:44.092 [2024-06-07 21:30:44.210820] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.092 [2024-06-07 21:30:44.210826] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:44.092 [2024-06-07 21:30:44.211685] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:44.092 [2024-06-07 21:30:44.212687] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:44.092 [2024-06-07 21:30:44.213696] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:44.092 [2024-06-07 21:30:44.214692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.092 [2024-06-07 21:30:44.214771] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.092 [2024-06-07 21:30:44.215708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:44.092 [2024-06-07 21:30:44.215718] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.092 [2024-06-07 21:30:44.215728] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.215753] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:44.092 [2024-06-07 21:30:44.215763] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.215785] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.092 [2024-06-07 21:30:44.215792] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.092 [2024-06-07 21:30:44.215809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.092 [2024-06-07 21:30:44.215861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:44.092 [2024-06-07 21:30:44.215873] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:44.092 [2024-06-07 21:30:44.215880] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:44.092 [2024-06-07 21:30:44.215885] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:44.092 [2024-06-07 21:30:44.215894] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:44.092 [2024-06-07 21:30:44.215900] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:44.092 [2024-06-07 21:30:44.215906] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:44.092 [2024-06-07 21:30:44.215912] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.215922] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.215935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:44.092 [2024-06-07 21:30:44.215954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:44.092 [2024-06-07 21:30:44.215968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.092 [2024-06-07 21:30:44.215978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.092 [2024-06-07 21:30:44.215989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.092 [2024-06-07 21:30:44.216000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.092 [2024-06-07 21:30:44.216006] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216017] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:44.092 [2024-06-07 21:30:44.216042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:44.092 [2024-06-07 21:30:44.216052] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:44.092 [2024-06-07 21:30:44.216059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216068] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216075] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:44.092 [2024-06-07 21:30:44.216103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:44.092 [2024-06-07 21:30:44.216163] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216173] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216183] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:44.092 [2024-06-07 21:30:44.216189] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:44.092 [2024-06-07 21:30:44.216197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:44.092 [2024-06-07 21:30:44.216215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:44.092 [2024-06-07 21:30:44.216227] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:44.092 [2024-06-07 21:30:44.216241] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216260] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.092 [2024-06-07 21:30:44.216266] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.092 [2024-06-07 21:30:44.216274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.092 [2024-06-07 21:30:44.216296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:44.092 [2024-06-07 21:30:44.216312] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216322] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216331] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.092 [2024-06-07 21:30:44.216336] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.092 [2024-06-07 21:30:44.216345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.092 [2024-06-07 21:30:44.216360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:44.092 [2024-06-07 21:30:44.216371] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:44.092 [2024-06-07 21:30:44.216382] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:13:44.093 [2024-06-07 21:30:44.216392] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:44.093 [2024-06-07 21:30:44.216400] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:44.093 [2024-06-07 21:30:44.216407] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:44.093 [2024-06-07 21:30:44.216413] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:44.093 [2024-06-07 21:30:44.216419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:44.093 [2024-06-07 21:30:44.216426] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:44.093 [2024-06-07 21:30:44.216449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:44.093 [2024-06-07 21:30:44.216465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:44.093 [2024-06-07 21:30:44.216479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:44.093 [2024-06-07 21:30:44.216491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:44.093 [2024-06-07 21:30:44.216504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:44.093 [2024-06-07 21:30:44.216518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:44.093 [2024-06-07 21:30:44.216531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:44.093 [2024-06-07 21:30:44.216545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:44.093 [2024-06-07 21:30:44.216558] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:44.093 [2024-06-07 21:30:44.216564] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:44.093 [2024-06-07 21:30:44.216569] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:44.093 [2024-06-07 21:30:44.216573] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:44.093 [2024-06-07 21:30:44.216581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:44.093 [2024-06-07 21:30:44.216590] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:44.093 [2024-06-07 21:30:44.216596] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:44.093 [2024-06-07 21:30:44.216604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:44.093 [2024-06-07 21:30:44.216613] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:44.093 [2024-06-07 21:30:44.216618] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.093 [2024-06-07 21:30:44.216625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.093 [2024-06-07 21:30:44.216637] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:44.093 [2024-06-07 21:30:44.216643] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:44.093 [2024-06-07 21:30:44.216650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:44.093 [2024-06-07 21:30:44.216659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:44.093 [2024-06-07 21:30:44.216676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:44.093 [2024-06-07 21:30:44.216688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:44.093 [2024-06-07 21:30:44.216700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:44.093 ===================================================== 00:13:44.093 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:44.093 ===================================================== 00:13:44.093 Controller Capabilities/Features 00:13:44.093 ================================ 00:13:44.093 Vendor ID: 4e58 00:13:44.093 Subsystem Vendor ID: 4e58 00:13:44.093 Serial Number: SPDK1 00:13:44.093 Model Number: SPDK bdev Controller 00:13:44.093 Firmware Version: 24.09 00:13:44.093 Recommended Arb Burst: 6 00:13:44.093 IEEE OUI Identifier: 8d 6b 50 00:13:44.093 Multi-path I/O 00:13:44.093 May have multiple subsystem ports: Yes 00:13:44.093 May have multiple controllers: Yes 00:13:44.093 Associated with SR-IOV VF: No 00:13:44.093 Max Data Transfer Size: 131072 00:13:44.093 Max Number of Namespaces: 32 00:13:44.093 Max Number of I/O Queues: 127 00:13:44.093 NVMe Specification Version (VS): 1.3 00:13:44.093 NVMe Specification Version (Identify): 1.3 00:13:44.093 Maximum Queue Entries: 256 00:13:44.093 Contiguous Queues Required: Yes 00:13:44.093 Arbitration Mechanisms Supported 00:13:44.093 Weighted Round Robin: Not Supported 00:13:44.093 Vendor Specific: Not Supported 00:13:44.093 Reset Timeout: 15000 ms 00:13:44.093 Doorbell Stride: 4 bytes 00:13:44.093 NVM Subsystem Reset: Not Supported 00:13:44.093 Command Sets Supported 00:13:44.093 NVM Command Set: Supported 00:13:44.093 Boot Partition: Not Supported 00:13:44.093 Memory Page Size Minimum: 4096 bytes 00:13:44.093 Memory Page Size Maximum: 4096 bytes 00:13:44.093 Persistent Memory Region: Not Supported 00:13:44.093 Optional Asynchronous Events Supported 00:13:44.093 Namespace Attribute Notices: Supported 00:13:44.093 Firmware Activation Notices: Not Supported 00:13:44.093 ANA Change Notices: Not Supported 
00:13:44.093 PLE Aggregate Log Change Notices: Not Supported 00:13:44.093 LBA Status Info Alert Notices: Not Supported 00:13:44.093 EGE Aggregate Log Change Notices: Not Supported 00:13:44.093 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.093 Zone Descriptor Change Notices: Not Supported 00:13:44.093 Discovery Log Change Notices: Not Supported 00:13:44.093 Controller Attributes 00:13:44.093 128-bit Host Identifier: Supported 00:13:44.093 Non-Operational Permissive Mode: Not Supported 00:13:44.093 NVM Sets: Not Supported 00:13:44.093 Read Recovery Levels: Not Supported 00:13:44.093 Endurance Groups: Not Supported 00:13:44.093 Predictable Latency Mode: Not Supported 00:13:44.093 Traffic Based Keep ALive: Not Supported 00:13:44.093 Namespace Granularity: Not Supported 00:13:44.093 SQ Associations: Not Supported 00:13:44.093 UUID List: Not Supported 00:13:44.093 Multi-Domain Subsystem: Not Supported 00:13:44.093 Fixed Capacity Management: Not Supported 00:13:44.093 Variable Capacity Management: Not Supported 00:13:44.093 Delete Endurance Group: Not Supported 00:13:44.093 Delete NVM Set: Not Supported 00:13:44.093 Extended LBA Formats Supported: Not Supported 00:13:44.093 Flexible Data Placement Supported: Not Supported 00:13:44.093 00:13:44.093 Controller Memory Buffer Support 00:13:44.093 ================================ 00:13:44.093 Supported: No 00:13:44.093 00:13:44.093 Persistent Memory Region Support 00:13:44.093 ================================ 00:13:44.093 Supported: No 00:13:44.093 00:13:44.093 Admin Command Set Attributes 00:13:44.093 ============================ 00:13:44.093 Security Send/Receive: Not Supported 00:13:44.093 Format NVM: Not Supported 00:13:44.093 Firmware Activate/Download: Not Supported 00:13:44.093 Namespace Management: Not Supported 00:13:44.093 Device Self-Test: Not Supported 00:13:44.093 Directives: Not Supported 00:13:44.093 NVMe-MI: Not Supported 00:13:44.093 Virtualization Management: Not Supported 00:13:44.093 Doorbell Buffer Config: Not Supported 00:13:44.093 Get LBA Status Capability: Not Supported 00:13:44.093 Command & Feature Lockdown Capability: Not Supported 00:13:44.093 Abort Command Limit: 4 00:13:44.093 Async Event Request Limit: 4 00:13:44.093 Number of Firmware Slots: N/A 00:13:44.093 Firmware Slot 1 Read-Only: N/A 00:13:44.093 Firmware Activation Without Reset: N/A 00:13:44.094 Multiple Update Detection Support: N/A 00:13:44.094 Firmware Update Granularity: No Information Provided 00:13:44.094 Per-Namespace SMART Log: No 00:13:44.094 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.094 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:44.094 Command Effects Log Page: Supported 00:13:44.094 Get Log Page Extended Data: Supported 00:13:44.094 Telemetry Log Pages: Not Supported 00:13:44.094 Persistent Event Log Pages: Not Supported 00:13:44.094 Supported Log Pages Log Page: May Support 00:13:44.094 Commands Supported & Effects Log Page: Not Supported 00:13:44.094 Feature Identifiers & Effects Log Page:May Support 00:13:44.094 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.094 Data Area 4 for Telemetry Log: Not Supported 00:13:44.094 Error Log Page Entries Supported: 128 00:13:44.094 Keep Alive: Supported 00:13:44.094 Keep Alive Granularity: 10000 ms 00:13:44.094 00:13:44.094 NVM Command Set Attributes 00:13:44.094 ========================== 00:13:44.094 Submission Queue Entry Size 00:13:44.094 Max: 64 00:13:44.094 Min: 64 00:13:44.094 Completion Queue Entry Size 00:13:44.094 Max: 16 00:13:44.094 Min: 16 
00:13:44.094 Number of Namespaces: 32
00:13:44.094 Compare Command: Supported
00:13:44.094 Write Uncorrectable Command: Not Supported
00:13:44.094 Dataset Management Command: Supported
00:13:44.094 Write Zeroes Command: Supported
00:13:44.094 Set Features Save Field: Not Supported
00:13:44.094 Reservations: Not Supported
00:13:44.094 Timestamp: Not Supported
00:13:44.094 Copy: Supported
00:13:44.094 Volatile Write Cache: Present
00:13:44.094 Atomic Write Unit (Normal): 1
00:13:44.094 Atomic Write Unit (PFail): 1
00:13:44.094 Atomic Compare & Write Unit: 1
00:13:44.094 Fused Compare & Write: Supported
00:13:44.094 Scatter-Gather List
00:13:44.094 SGL Command Set: Supported (Dword aligned)
00:13:44.094 SGL Keyed: Not Supported
00:13:44.094 SGL Bit Bucket Descriptor: Not Supported
00:13:44.094 SGL Metadata Pointer: Not Supported
00:13:44.094 Oversized SGL: Not Supported
00:13:44.094 SGL Metadata Address: Not Supported
00:13:44.094 SGL Offset: Not Supported
00:13:44.094 Transport SGL Data Block: Not Supported
00:13:44.094 Replay Protected Memory Block: Not Supported
00:13:44.094
00:13:44.094 Firmware Slot Information
00:13:44.094 =========================
00:13:44.094 Active slot: 1
00:13:44.094 Slot 1 Firmware Revision: 24.09
00:13:44.094
00:13:44.094
00:13:44.094 Commands Supported and Effects
00:13:44.094 ==============================
00:13:44.094 Admin Commands
00:13:44.094 --------------
00:13:44.094 Get Log Page (02h): Supported
00:13:44.094 Identify (06h): Supported
00:13:44.094 Abort (08h): Supported
00:13:44.094 Set Features (09h): Supported
00:13:44.094 Get Features (0Ah): Supported
00:13:44.094 Asynchronous Event Request (0Ch): Supported
00:13:44.094 Keep Alive (18h): Supported
00:13:44.094 I/O Commands
00:13:44.094 ------------
00:13:44.094 Flush (00h): Supported LBA-Change
00:13:44.094 Write (01h): Supported LBA-Change
00:13:44.094 Read (02h): Supported
00:13:44.094 Compare (05h): Supported
00:13:44.094 Write Zeroes (08h): Supported LBA-Change
00:13:44.094 Dataset Management (09h): Supported LBA-Change
00:13:44.094 Copy (19h): Supported LBA-Change
00:13:44.094 Unknown (79h): Supported LBA-Change
00:13:44.094 Unknown (7Ah): Supported
00:13:44.094
00:13:44.094 Error Log
00:13:44.094 =========
00:13:44.094
00:13:44.094 Arbitration
00:13:44.094 ===========
00:13:44.094 Arbitration Burst: 1
00:13:44.094
00:13:44.094 Power Management
00:13:44.094 ================
00:13:44.094 Number of Power States: 1
00:13:44.094 Current Power State: Power State #0
00:13:44.094 Power State #0:
00:13:44.094 Max Power: 0.00 W
00:13:44.094 Non-Operational State: Operational
00:13:44.094 Entry Latency: Not Reported
00:13:44.094 Exit Latency: Not Reported
00:13:44.094 Relative Read Throughput: 0
00:13:44.094 Relative Read Latency: 0
00:13:44.094 Relative Write Throughput: 0
00:13:44.094 Relative Write Latency: 0
00:13:44.094 Idle Power: Not Reported
00:13:44.094 Active Power: Not Reported
00:13:44.094 Non-Operational Permissive Mode: Not Supported
00:13:44.094
00:13:44.094 Health Information
00:13:44.094 ==================
00:13:44.094 Critical Warnings:
00:13:44.094 Available Spare Space: OK
00:13:44.094 Temperature: OK
00:13:44.094 Device Reliability: OK
00:13:44.094 Read Only: No
00:13:44.094 Volatile Memory Backup: OK
00:13:44.094 [2024-06-07 21:30:44.216819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:13:44.094 [2024-06-07 21:30:44.216830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:13:44.094 [2024-06-07 21:30:44.216859] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD
00:13:44.094 [2024-06-07 21:30:44.216870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:44.094 [2024-06-07 21:30:44.216879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:44.094 [2024-06-07 21:30:44.216887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:44.094 [2024-06-07 21:30:44.216895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:44.094 [2024-06-07 21:30:44.220033] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:13:44.094 [2024-06-07 21:30:44.220047] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:13:44.094 [2024-06-07 21:30:44.220733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:44.094 [2024-06-07 21:30:44.220797] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us
00:13:44.094 [2024-06-07 21:30:44.220805] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms
00:13:44.094 [2024-06-07 21:30:44.221748] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:13:44.094 [2024-06-07 21:30:44.221763] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds
00:13:44.094 [2024-06-07 21:30:44.221822] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:13:44.094 [2024-06-07 21:30:44.223775] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:44.094 Current Temperature: 0 Kelvin (-273 Celsius)
00:13:44.094 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:13:44.094 Available Spare: 0%
00:13:44.094 Available Spare Threshold: 0%
00:13:44.094 Life Percentage Used: 0%
00:13:44.094 Data Units Read: 0
00:13:44.094 Data Units Written: 0
00:13:44.094 Host Read Commands: 0
00:13:44.094 Host Write Commands: 0
00:13:44.094 Controller Busy Time: 0 minutes
00:13:44.094 Power Cycles: 0
00:13:44.094 Power On Hours: 0 hours
00:13:44.094 Unsafe Shutdowns: 0
00:13:44.094 Unrecoverable Media Errors: 0
00:13:44.094 Lifetime Error Log Entries: 0
00:13:44.094 Warning Temperature Time: 0 minutes
00:13:44.094 Critical Temperature Time: 0 minutes
00:13:44.094
00:13:44.094 Number of Queues
00:13:44.094 ================
00:13:44.094 Number of I/O Submission Queues: 127
00:13:44.094 Number of I/O Completion Queues: 127
00:13:44.094
00:13:44.094 Active Namespaces
00:13:44.094 =================
00:13:44.094 Namespace ID:1
00:13:44.094 Error Recovery Timeout: Unlimited
00:13:44.094 Command Set Identifier: NVM (00h)
00:13:44.094 Deallocate: Supported
00:13:44.094 Deallocated/Unwritten
Error: Not Supported 00:13:44.094 Deallocated Read Value: Unknown 00:13:44.094 Deallocate in Write Zeroes: Not Supported 00:13:44.094 Deallocated Guard Field: 0xFFFF 00:13:44.094 Flush: Supported 00:13:44.094 Reservation: Supported 00:13:44.094 Namespace Sharing Capabilities: Multiple Controllers 00:13:44.094 Size (in LBAs): 131072 (0GiB) 00:13:44.094 Capacity (in LBAs): 131072 (0GiB) 00:13:44.094 Utilization (in LBAs): 131072 (0GiB) 00:13:44.094 NGUID: BD59714F98A74D7E8A6E8E15DD8C7C11 00:13:44.095 UUID: bd59714f-98a7-4d7e-8a6e-8e15dd8c7c11 00:13:44.095 Thin Provisioning: Not Supported 00:13:44.095 Per-NS Atomic Units: Yes 00:13:44.095 Atomic Boundary Size (Normal): 0 00:13:44.095 Atomic Boundary Size (PFail): 0 00:13:44.095 Atomic Boundary Offset: 0 00:13:44.095 Maximum Single Source Range Length: 65535 00:13:44.095 Maximum Copy Length: 65535 00:13:44.095 Maximum Source Range Count: 1 00:13:44.095 NGUID/EUI64 Never Reused: No 00:13:44.095 Namespace Write Protected: No 00:13:44.095 Number of LBA Formats: 1 00:13:44.095 Current LBA Format: LBA Format #00 00:13:44.095 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:44.095 00:13:44.095 21:30:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:44.095 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.353 [2024-06-07 21:30:44.463896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:49.692 Initializing NVMe Controllers 00:13:49.692 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:49.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:49.692 Initialization complete. Launching workers. 00:13:49.692 ======================================================== 00:13:49.692 Latency(us) 00:13:49.692 Device Information : IOPS MiB/s Average min max 00:13:49.692 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 25051.15 97.86 5108.60 1411.94 10486.07 00:13:49.692 ======================================================== 00:13:49.692 Total : 25051.15 97.86 5108.60 1411.94 10486.07 00:13:49.692 00:13:49.692 [2024-06-07 21:30:49.483735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:49.692 21:30:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:49.692 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.692 [2024-06-07 21:30:49.725919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:54.966 Initializing NVMe Controllers 00:13:54.967 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:54.967 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:54.967 Initialization complete. Launching workers. 
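The read pass above and the write pass whose numbers follow are the same spdk_nvme_perf invocation with only the -w workload flipped: 256 MB of hugepage memory (-s 256), queue depth 128 (-q 128), 4 KiB I/O (-o 4096), a 5 second run (-t 5), and one worker on lcore 1 (-c 0x2). A minimal sketch for re-issuing the pair by hand; the transport string is copied verbatim from the log, while SPDK_DIR is an assumption about where the build tree lives:

    #!/usr/bin/env bash
    # Re-run the read/write perf pair against the vfio-user1 controller.
    # SPDK_DIR is an assumed location, not taken from this log.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    for wl in read write; do
        "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
    done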
00:13:54.967 ======================================================== 00:13:54.967 Latency(us) 00:13:54.967 Device Information : IOPS MiB/s Average min max 00:13:54.967 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.52 62.72 7976.31 4986.22 10973.20 00:13:54.967 ======================================================== 00:13:54.967 Total : 16057.52 62.72 7976.31 4986.22 10973.20 00:13:54.967 00:13:54.967 [2024-06-07 21:30:54.768784] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:54.967 21:30:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:54.967 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.967 [2024-06-07 21:30:55.012019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:00.240 [2024-06-07 21:31:00.071238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:00.240 Initializing NVMe Controllers 00:14:00.240 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.240 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.240 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:00.240 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:00.240 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:00.240 Initialization complete. Launching workers. 00:14:00.240 Starting thread on core 2 00:14:00.240 Starting thread on core 3 00:14:00.240 Starting thread on core 1 00:14:00.240 21:31:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:00.240 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.240 [2024-06-07 21:31:00.414097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:03.531 [2024-06-07 21:31:03.485666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:03.531 Initializing NVMe Controllers 00:14:03.531 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.531 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.531 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:03.531 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:03.531 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:03.531 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:03.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:03.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:03.531 Initialization complete. Launching workers. 
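Both example runs pin their workers with plain hex core bitmaps: the reconnect run above used -c 0xE, bits 1 through 3, matching its three "Starting thread on core 1/2/3" lines, while the arbitration run's printed configuration shows -c 0xf, bits 0 through 3, so four per-core result rows follow below. A quick sanity check of the mask arithmetic:

    # Hex core masks: bit n selects lcore n.
    printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))               # cores 1-3 -> 0xE
    printf '0x%X\n' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))    # cores 0-3 -> 0xF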
00:14:03.531 Starting thread on core 1 with urgent priority queue 00:14:03.531 Starting thread on core 2 with urgent priority queue 00:14:03.531 Starting thread on core 3 with urgent priority queue 00:14:03.531 Starting thread on core 0 with urgent priority queue 00:14:03.531 SPDK bdev Controller (SPDK1 ) core 0: 9436.00 IO/s 10.60 secs/100000 ios 00:14:03.531 SPDK bdev Controller (SPDK1 ) core 1: 7481.00 IO/s 13.37 secs/100000 ios 00:14:03.531 SPDK bdev Controller (SPDK1 ) core 2: 8757.67 IO/s 11.42 secs/100000 ios 00:14:03.531 SPDK bdev Controller (SPDK1 ) core 3: 7971.00 IO/s 12.55 secs/100000 ios 00:14:03.531 ======================================================== 00:14:03.531 00:14:03.531 21:31:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:03.531 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.790 [2024-06-07 21:31:03.810591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:03.790 Initializing NVMe Controllers 00:14:03.790 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.790 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.790 Namespace ID: 1 size: 0GB 00:14:03.790 Initialization complete. 00:14:03.790 INFO: using host memory buffer for IO 00:14:03.790 Hello world! 00:14:03.790 [2024-06-07 21:31:03.845110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:03.790 21:31:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:03.790 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.048 [2024-06-07 21:31:04.174580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.986 Initializing NVMe Controllers 00:14:04.986 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.986 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.986 Initialization complete. Launching workers. 
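The overhead tool launched above measures per-I/O software overhead rather than throughput: the summary lines that follow give avg/min/max in nanoseconds for the submit and complete paths, and the histograms below them are cumulative counts per microsecond bucket. A small sketch for pulling just the summary triples out of a captured run, assuming the tool's stdout was saved to overhead.log (an assumed file name):

    # Print the "avg, min, max" triples (ns) from the summary lines below.
    grep '(in ns) avg, min, max' overhead.log | sed 's/.*= //'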
00:14:04.986 submit (in ns) avg, min, max = 9999.9, 4546.4, 4011241.8 00:14:04.986 complete (in ns) avg, min, max = 18911.0, 2702.7, 4005539.1 00:14:04.986 00:14:04.986 Submit histogram 00:14:04.986 ================ 00:14:04.986 Range in us Cumulative Count 00:14:04.986 4.538 - 4.567: 0.2133% ( 36) 00:14:04.986 4.567 - 4.596: 1.7476% ( 259) 00:14:04.986 4.596 - 4.625: 4.4254% ( 452) 00:14:04.986 4.625 - 4.655: 7.7251% ( 557) 00:14:04.986 4.655 - 4.684: 18.3531% ( 1794) 00:14:04.986 4.684 - 4.713: 32.1327% ( 2326) 00:14:04.986 4.713 - 4.742: 43.4479% ( 1910) 00:14:04.986 4.742 - 4.771: 55.4206% ( 2021) 00:14:04.986 4.771 - 4.800: 66.7476% ( 1912) 00:14:04.986 4.800 - 4.829: 76.2796% ( 1609) 00:14:04.986 4.829 - 4.858: 83.3057% ( 1186) 00:14:04.986 4.858 - 4.887: 85.9360% ( 444) 00:14:04.986 4.887 - 4.916: 87.2334% ( 219) 00:14:04.986 4.916 - 4.945: 88.4242% ( 201) 00:14:04.986 4.945 - 4.975: 90.2133% ( 302) 00:14:04.986 4.975 - 5.004: 92.1031% ( 319) 00:14:04.986 5.004 - 5.033: 94.0107% ( 322) 00:14:04.986 5.033 - 5.062: 95.8235% ( 306) 00:14:04.986 5.062 - 5.091: 97.2097% ( 234) 00:14:04.986 5.091 - 5.120: 98.1754% ( 163) 00:14:04.986 5.120 - 5.149: 98.7737% ( 101) 00:14:04.986 5.149 - 5.178: 99.1647% ( 66) 00:14:04.986 5.178 - 5.207: 99.3483% ( 31) 00:14:04.986 5.207 - 5.236: 99.4135% ( 11) 00:14:04.986 5.236 - 5.265: 99.4431% ( 5) 00:14:04.986 5.265 - 5.295: 99.4609% ( 3) 00:14:04.986 5.295 - 5.324: 99.4727% ( 2) 00:14:04.986 5.324 - 5.353: 99.4787% ( 1) 00:14:04.986 5.353 - 5.382: 99.4846% ( 1) 00:14:04.986 5.411 - 5.440: 99.4964% ( 2) 00:14:04.986 6.982 - 7.011: 99.5024% ( 1) 00:14:04.986 7.040 - 7.069: 99.5083% ( 1) 00:14:04.986 7.069 - 7.098: 99.5261% ( 3) 00:14:04.986 7.098 - 7.127: 99.5320% ( 1) 00:14:04.986 7.127 - 7.156: 99.5379% ( 1) 00:14:04.986 7.156 - 7.185: 99.5438% ( 1) 00:14:04.986 7.215 - 7.244: 99.5498% ( 1) 00:14:04.986 7.244 - 7.273: 99.5675% ( 3) 00:14:04.986 7.302 - 7.331: 99.5735% ( 1) 00:14:04.986 7.331 - 7.360: 99.5853% ( 2) 00:14:04.986 7.360 - 7.389: 99.5912% ( 1) 00:14:04.986 7.418 - 7.447: 99.6090% ( 3) 00:14:04.986 7.447 - 7.505: 99.6268% ( 3) 00:14:04.986 7.505 - 7.564: 99.6386% ( 2) 00:14:04.986 7.564 - 7.622: 99.6445% ( 1) 00:14:04.986 7.622 - 7.680: 99.6505% ( 1) 00:14:04.986 7.680 - 7.738: 99.6682% ( 3) 00:14:04.986 7.738 - 7.796: 99.6742% ( 1) 00:14:04.987 7.796 - 7.855: 99.6919% ( 3) 00:14:04.987 7.913 - 7.971: 99.7038% ( 2) 00:14:04.987 8.029 - 8.087: 99.7097% ( 1) 00:14:04.987 8.204 - 8.262: 99.7156% ( 1) 00:14:04.987 8.320 - 8.378: 99.7216% ( 1) 00:14:04.987 8.378 - 8.436: 99.7275% ( 1) 00:14:04.987 8.436 - 8.495: 99.7334% ( 1) 00:14:04.987 8.553 - 8.611: 99.7512% ( 3) 00:14:04.987 8.785 - 8.844: 99.7571% ( 1) 00:14:04.987 8.844 - 8.902: 99.7690% ( 2) 00:14:04.987 8.902 - 8.960: 99.7749% ( 1) 00:14:04.987 9.076 - 9.135: 99.7808% ( 1) 00:14:04.987 9.135 - 9.193: 99.7867% ( 1) 00:14:04.987 9.193 - 9.251: 99.7927% ( 1) 00:14:04.987 9.251 - 9.309: 99.8045% ( 2) 00:14:04.987 9.309 - 9.367: 99.8104% ( 1) 00:14:04.987 9.425 - 9.484: 99.8164% ( 1) 00:14:04.987 9.775 - 9.833: 99.8341% ( 3) 00:14:04.987 9.891 - 9.949: 99.8400% ( 1) 00:14:04.987 10.356 - 10.415: 99.8460% ( 1) 00:14:04.987 10.822 - 10.880: 99.8519% ( 1) 00:14:04.987 11.055 - 11.113: 99.8578% ( 1) 00:14:04.987 11.113 - 11.171: 99.8637% ( 1) 00:14:04.987 11.287 - 11.345: 99.8697% ( 1) 00:14:04.987 3991.738 - 4021.527: 100.0000% ( 22) 00:14:04.987 00:14:04.987 Complete histogram 00:14:04.987 ================== 00:14:04.987 Range in us Cumulative Count 00:14:04.987 2.691 - 2.705: 0.0059% ( 1) 
00:14:04.987 2.705 - 2.720: 0.0237% ( 3) 00:14:04.987 2.720 - 2.735: 1.1315% ( 187) 00:14:04.987 2.735 - 2.749: 4.4254% ( 556) 00:14:04.987 [2024-06-07 21:31:05.197495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.987 2.749 - 2.764: 7.3637% ( 496) 00:14:04.987 2.764 - 2.778: 9.9822% ( 442) 00:14:04.987 2.778 - 2.793: 26.4810% ( 2785) 00:14:04.987 2.793 - 2.807: 59.3780% ( 5553) 00:14:04.987 2.807 - 2.822: 79.6742% ( 3426) 00:14:04.987 2.822 - 2.836: 86.0723% ( 1080) 00:14:04.987 2.836 - 2.851: 89.3602% ( 555) 00:14:04.987 2.851 - 2.865: 91.6410% ( 385) 00:14:04.987 2.865 - 2.880: 94.1410% ( 422) 00:14:04.987 2.880 - 2.895: 97.0675% ( 494) 00:14:04.987 2.895 - 2.909: 98.6197% ( 262) 00:14:04.987 2.909 - 2.924: 99.0521% ( 73) 00:14:04.987 2.924 - 2.938: 99.1765% ( 21) 00:14:04.987 2.938 - 2.953: 99.2713% ( 16) 00:14:04.987 2.953 - 2.967: 99.3069% ( 6) 00:14:04.987 2.967 - 2.982: 99.3246% ( 3) 00:14:04.987 2.982 - 2.996: 99.3543% ( 5) 00:14:04.987 3.011 - 3.025: 99.3661% ( 2) 00:14:04.987 3.025 - 3.040: 99.3780% ( 2) 00:14:04.987 3.069 - 3.084: 99.3839% ( 1) 00:14:04.987 3.098 - 3.113: 99.3898% ( 1) 00:14:04.987 3.113 - 3.127: 99.3957% ( 1) 00:14:04.987 3.273 - 3.287: 99.4017% ( 1) 00:14:04.987 5.033 - 5.062: 99.4076% ( 1) 00:14:04.987 5.091 - 5.120: 99.4135% ( 1) 00:14:04.987 5.236 - 5.265: 99.4194% ( 1) 00:14:04.987 5.382 - 5.411: 99.4254% ( 1) 00:14:04.987 5.440 - 5.469: 99.4313% ( 1) 00:14:04.987 5.498 - 5.527: 99.4372% ( 1) 00:14:04.987 5.731 - 5.760: 99.4431% ( 1) 00:14:04.987 5.847 - 5.876: 99.4491% ( 1) 00:14:04.987 6.080 - 6.109: 99.4550% ( 1) 00:14:04.987 6.109 - 6.138: 99.4668% ( 2) 00:14:04.987 6.342 - 6.371: 99.4727% ( 1) 00:14:04.987 6.371 - 6.400: 99.4787% ( 1) 00:14:04.987 6.429 - 6.458: 99.4846% ( 1) 00:14:04.987 6.458 - 6.487: 99.4905% ( 1) 00:14:04.987 6.516 - 6.545: 99.4964% ( 1) 00:14:04.987 6.545 - 6.575: 99.5024% ( 1) 00:14:04.987 6.691 - 6.720: 99.5142% ( 2) 00:14:04.987 6.924 - 6.953: 99.5201% ( 1) 00:14:04.987 6.982 - 7.011: 99.5320% ( 2) 00:14:04.987 7.098 - 7.127: 99.5379% ( 1) 00:14:04.987 7.185 - 7.215: 99.5438% ( 1) 00:14:04.987 7.302 - 7.331: 99.5557% ( 2) 00:14:04.987 7.447 - 7.505: 99.5616% ( 1) 00:14:04.987 7.505 - 7.564: 99.5675% ( 1) 00:14:04.987 8.320 - 8.378: 99.5735% ( 1) 00:14:04.987 8.436 - 8.495: 99.5853% ( 2) 00:14:04.987 8.611 - 8.669: 99.5912% ( 1) 00:14:04.987 8.844 - 8.902: 99.5972% ( 1) 00:14:04.987 3991.738 - 4021.527: 100.0000% ( 68) 00:14:04.987 00:14:04.987 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:04.987 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:04.987 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:04.987 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:05.246 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:05.246 [ 00:14:05.246 { 00:14:05.246 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:05.246 "subtype": "Discovery", 00:14:05.246 "listen_addresses": [], 00:14:05.246 "allow_any_host": true, 00:14:05.246 "hosts": [] 00:14:05.246 }, 00:14:05.246 { 00:14:05.246 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:05.246 "subtype": "NVMe", 00:14:05.246
"listen_addresses": [ 00:14:05.246 { 00:14:05.246 "trtype": "VFIOUSER", 00:14:05.246 "adrfam": "IPv4", 00:14:05.246 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:05.246 "trsvcid": "0" 00:14:05.246 } 00:14:05.246 ], 00:14:05.246 "allow_any_host": true, 00:14:05.246 "hosts": [], 00:14:05.246 "serial_number": "SPDK1", 00:14:05.246 "model_number": "SPDK bdev Controller", 00:14:05.246 "max_namespaces": 32, 00:14:05.246 "min_cntlid": 1, 00:14:05.246 "max_cntlid": 65519, 00:14:05.246 "namespaces": [ 00:14:05.246 { 00:14:05.246 "nsid": 1, 00:14:05.246 "bdev_name": "Malloc1", 00:14:05.246 "name": "Malloc1", 00:14:05.247 "nguid": "BD59714F98A74D7E8A6E8E15DD8C7C11", 00:14:05.247 "uuid": "bd59714f-98a7-4d7e-8a6e-8e15dd8c7c11" 00:14:05.247 } 00:14:05.247 ] 00:14:05.247 }, 00:14:05.247 { 00:14:05.247 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:05.247 "subtype": "NVMe", 00:14:05.247 "listen_addresses": [ 00:14:05.247 { 00:14:05.247 "trtype": "VFIOUSER", 00:14:05.247 "adrfam": "IPv4", 00:14:05.247 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:05.247 "trsvcid": "0" 00:14:05.247 } 00:14:05.247 ], 00:14:05.247 "allow_any_host": true, 00:14:05.247 "hosts": [], 00:14:05.247 "serial_number": "SPDK2", 00:14:05.247 "model_number": "SPDK bdev Controller", 00:14:05.247 "max_namespaces": 32, 00:14:05.247 "min_cntlid": 1, 00:14:05.247 "max_cntlid": 65519, 00:14:05.247 "namespaces": [ 00:14:05.247 { 00:14:05.247 "nsid": 1, 00:14:05.247 "bdev_name": "Malloc2", 00:14:05.247 "name": "Malloc2", 00:14:05.247 "nguid": "B6BC0C32D9C04364B6607E5EEB8AD815", 00:14:05.247 "uuid": "b6bc0c32-d9c0-4364-b660-7e5eeb8ad815" 00:14:05.247 } 00:14:05.247 ] 00:14:05.247 } 00:14:05.247 ] 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1366559 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:14:05.247 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:05.506 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:05.506 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.506 [2024-06-07 21:31:05.703207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.506 Malloc3 00:14:05.764 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:05.764 [2024-06-07 21:31:05.932225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.764 21:31:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:05.764 Asynchronous Event Request test 00:14:05.764 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.764 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.764 Registering asynchronous event callbacks... 00:14:05.764 Starting namespace attribute notice tests for all controllers... 00:14:05.764 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:05.764 aer_cb - Changed Namespace 00:14:05.764 Cleaning up... 00:14:06.023 [ 00:14:06.023 { 00:14:06.023 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.023 "subtype": "Discovery", 00:14:06.023 "listen_addresses": [], 00:14:06.023 "allow_any_host": true, 00:14:06.023 "hosts": [] 00:14:06.023 }, 00:14:06.024 { 00:14:06.024 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.024 "subtype": "NVMe", 00:14:06.024 "listen_addresses": [ 00:14:06.024 { 00:14:06.024 "trtype": "VFIOUSER", 00:14:06.024 "adrfam": "IPv4", 00:14:06.024 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.024 "trsvcid": "0" 00:14:06.024 } 00:14:06.024 ], 00:14:06.024 "allow_any_host": true, 00:14:06.024 "hosts": [], 00:14:06.024 "serial_number": "SPDK1", 00:14:06.024 "model_number": "SPDK bdev Controller", 00:14:06.024 "max_namespaces": 32, 00:14:06.024 "min_cntlid": 1, 00:14:06.024 "max_cntlid": 65519, 00:14:06.024 "namespaces": [ 00:14:06.024 { 00:14:06.024 "nsid": 1, 00:14:06.024 "bdev_name": "Malloc1", 00:14:06.024 "name": "Malloc1", 00:14:06.024 "nguid": "BD59714F98A74D7E8A6E8E15DD8C7C11", 00:14:06.024 "uuid": "bd59714f-98a7-4d7e-8a6e-8e15dd8c7c11" 00:14:06.024 }, 00:14:06.024 { 00:14:06.024 "nsid": 2, 00:14:06.024 "bdev_name": "Malloc3", 00:14:06.024 "name": "Malloc3", 00:14:06.024 "nguid": "66A1C8EDC3FE4EBF8285E11B75F83AB6", 00:14:06.024 "uuid": "66a1c8ed-c3fe-4ebf-8285-e11b75f83ab6" 00:14:06.024 } 00:14:06.024 ] 00:14:06.024 }, 00:14:06.024 { 00:14:06.024 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.024 "subtype": "NVMe", 00:14:06.024 "listen_addresses": [ 00:14:06.024 { 00:14:06.024 "trtype": "VFIOUSER", 00:14:06.024 "adrfam": "IPv4", 00:14:06.024 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.024 "trsvcid": "0" 00:14:06.024 } 00:14:06.024 ], 00:14:06.024 "allow_any_host": true, 00:14:06.024 "hosts": [], 00:14:06.024 "serial_number": "SPDK2", 00:14:06.024 "model_number": "SPDK bdev Controller", 00:14:06.024 
"max_namespaces": 32, 00:14:06.024 "min_cntlid": 1, 00:14:06.024 "max_cntlid": 65519, 00:14:06.024 "namespaces": [ 00:14:06.024 { 00:14:06.024 "nsid": 1, 00:14:06.024 "bdev_name": "Malloc2", 00:14:06.024 "name": "Malloc2", 00:14:06.024 "nguid": "B6BC0C32D9C04364B6607E5EEB8AD815", 00:14:06.024 "uuid": "b6bc0c32-d9c0-4364-b660-7e5eeb8ad815" 00:14:06.024 } 00:14:06.024 ] 00:14:06.024 } 00:14:06.024 ] 00:14:06.024 21:31:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1366559 00:14:06.024 21:31:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:06.024 21:31:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:06.024 21:31:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:06.024 21:31:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:06.024 [2024-06-07 21:31:06.153060] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:14:06.024 [2024-06-07 21:31:06.153103] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366655 ] 00:14:06.024 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.024 [2024-06-07 21:31:06.190433] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:06.024 [2024-06-07 21:31:06.198276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.024 [2024-06-07 21:31:06.198301] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1f78883000 00:14:06.024 [2024-06-07 21:31:06.199278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.024 [2024-06-07 21:31:06.200276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.024 [2024-06-07 21:31:06.201288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.024 [2024-06-07 21:31:06.202293] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.024 [2024-06-07 21:31:06.203301] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.024 [2024-06-07 21:31:06.204303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.024 [2024-06-07 21:31:06.205315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.024 [2024-06-07 21:31:06.206329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.024 [2024-06-07 21:31:06.207338] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.024 [2024-06-07 21:31:06.207355] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1f78878000 00:14:06.024 [2024-06-07 21:31:06.208763] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.024 [2024-06-07 21:31:06.226506] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:06.024 [2024-06-07 21:31:06.226535] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:06.024 [2024-06-07 21:31:06.231629] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:06.024 [2024-06-07 21:31:06.231683] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:06.024 [2024-06-07 21:31:06.231779] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:06.024 [2024-06-07 21:31:06.231797] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:06.024 [2024-06-07 21:31:06.231805] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:06.024 [2024-06-07 21:31:06.232636] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:06.024 [2024-06-07 21:31:06.232649] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:06.024 [2024-06-07 21:31:06.232659] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:06.024 [2024-06-07 21:31:06.233643] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:06.024 [2024-06-07 21:31:06.233654] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:06.024 [2024-06-07 21:31:06.233664] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:06.024 [2024-06-07 21:31:06.234657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:06.024 [2024-06-07 21:31:06.234670] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:06.024 [2024-06-07 21:31:06.235657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:06.024 [2024-06-07 21:31:06.235670] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:06.024 [2024-06-07 21:31:06.235676] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:06.024 [2024-06-07 21:31:06.235685] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:06.024 [2024-06-07 21:31:06.235792] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:06.024 [2024-06-07 21:31:06.235798] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:06.024 [2024-06-07 21:31:06.235805] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:06.024 [2024-06-07 21:31:06.236670] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:06.024 [2024-06-07 21:31:06.237670] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:06.024 [2024-06-07 21:31:06.238681] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:06.024 [2024-06-07 21:31:06.239687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.024 [2024-06-07 21:31:06.239736] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:06.024 [2024-06-07 21:31:06.240696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:06.024 [2024-06-07 21:31:06.240709] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:06.024 [2024-06-07 21:31:06.240715] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:06.024 [2024-06-07 21:31:06.240740] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:06.024 [2024-06-07 21:31:06.240755] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:06.024 [2024-06-07 21:31:06.240772] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.024 [2024-06-07 21:31:06.240779] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.024 [2024-06-07 21:31:06.240793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.024 [2024-06-07 21:31:06.248034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:06.025 [2024-06-07 21:31:06.248049] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:06.025 [2024-06-07 21:31:06.248055] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:06.025 [2024-06-07 21:31:06.248061] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:06.025 [2024-06-07 21:31:06.248070] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:06.025 [2024-06-07 21:31:06.248076] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:06.025 [2024-06-07 21:31:06.248083] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:06.025 [2024-06-07 21:31:06.248089] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.248099] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.248112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:06.025 [2024-06-07 21:31:06.256034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:06.025 [2024-06-07 21:31:06.256051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.025 [2024-06-07 21:31:06.256062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.025 [2024-06-07 21:31:06.256075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.025 [2024-06-07 21:31:06.256086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.025 [2024-06-07 21:31:06.256093] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.256104] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.256116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:06.025 [2024-06-07 21:31:06.264031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:06.025 [2024-06-07 21:31:06.264042] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:06.025 [2024-06-07 21:31:06.264049] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.264058] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.264065] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.264077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.025 [2024-06-07 21:31:06.272033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:06.025 [2024-06-07 21:31:06.272099] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.272110] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.272121] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:06.025 [2024-06-07 21:31:06.272127] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:06.025 [2024-06-07 21:31:06.272135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:06.025 [2024-06-07 21:31:06.280031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:06.025 [2024-06-07 21:31:06.280047] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:06.025 [2024-06-07 21:31:06.280059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.280069] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.280078] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.025 [2024-06-07 21:31:06.280084] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.025 [2024-06-07 21:31:06.280093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.025 [2024-06-07 21:31:06.288032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:06.025 [2024-06-07 21:31:06.288057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.288067] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:06.025 [2024-06-07 21:31:06.288077] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.025 [2024-06-07 21:31:06.288083] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.025 [2024-06-07 21:31:06.288091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.285 [2024-06-07 21:31:06.296033] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:06.285 [2024-06-07 21:31:06.296046] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:06.285 [2024-06-07 21:31:06.296055] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:06.285 [2024-06-07 21:31:06.296065] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:06.285 [2024-06-07 21:31:06.296073] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:06.285 [2024-06-07 21:31:06.296080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:06.285 [2024-06-07 21:31:06.296086] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:06.285 [2024-06-07 21:31:06.296092] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:06.285 [2024-06-07 21:31:06.296098] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:06.285 [2024-06-07 21:31:06.296121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:06.285 [2024-06-07 21:31:06.304033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:06.285 [2024-06-07 21:31:06.304050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:06.285 [2024-06-07 21:31:06.312031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:06.285 [2024-06-07 21:31:06.312047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:06.285 [2024-06-07 21:31:06.320031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:06.285 [2024-06-07 21:31:06.320047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.285 [2024-06-07 21:31:06.328030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:06.285 [2024-06-07 21:31:06.328047] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:06.285 [2024-06-07 21:31:06.328053] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:06.285 [2024-06-07 21:31:06.328058] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:06.285 [2024-06-07 21:31:06.328062] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:06.285 [2024-06-07 21:31:06.328073] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:06.285 [2024-06-07 21:31:06.328083] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:06.285 [2024-06-07 21:31:06.328089] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:06.285 [2024-06-07 21:31:06.328097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:06.285 [2024-06-07 21:31:06.328106] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:06.285 [2024-06-07 21:31:06.328111] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.285 [2024-06-07 21:31:06.328119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.285 [2024-06-07 21:31:06.328128] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:06.285 [2024-06-07 21:31:06.328134] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:06.285 [2024-06-07 21:31:06.328142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:06.285 [2024-06-07 21:31:06.336035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:06.285 [2024-06-07 21:31:06.336054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:06.285 [2024-06-07 21:31:06.336065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:06.285 [2024-06-07 21:31:06.336077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:06.285 ===================================================== 00:14:06.285 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:06.285 ===================================================== 00:14:06.285 Controller Capabilities/Features 00:14:06.285 ================================ 00:14:06.285 Vendor ID: 4e58 00:14:06.285 Subsystem Vendor ID: 4e58 00:14:06.285 Serial Number: SPDK2 00:14:06.285 Model Number: SPDK bdev Controller 00:14:06.285 Firmware Version: 24.09 00:14:06.285 Recommended Arb Burst: 6 00:14:06.285 IEEE OUI Identifier: 8d 6b 50 00:14:06.285 Multi-path I/O 00:14:06.285 May have multiple subsystem ports: Yes 00:14:06.285 May have multiple controllers: Yes 00:14:06.285 Associated with SR-IOV VF: No 00:14:06.285 Max Data Transfer Size: 131072 00:14:06.285 Max Number of Namespaces: 32 00:14:06.285 Max Number of I/O Queues: 127 00:14:06.285 NVMe Specification Version (VS): 1.3 00:14:06.285 NVMe Specification Version (Identify): 1.3 00:14:06.285 Maximum Queue Entries: 256 00:14:06.285 Contiguous Queues Required: Yes 00:14:06.285 Arbitration Mechanisms Supported 00:14:06.285 Weighted Round Robin: Not Supported 00:14:06.285 Vendor Specific: Not Supported 00:14:06.285 Reset Timeout: 15000 ms 00:14:06.285 Doorbell Stride: 4 bytes 
00:14:06.285 NVM Subsystem Reset: Not Supported 00:14:06.285 Command Sets Supported 00:14:06.285 NVM Command Set: Supported 00:14:06.285 Boot Partition: Not Supported 00:14:06.285 Memory Page Size Minimum: 4096 bytes 00:14:06.285 Memory Page Size Maximum: 4096 bytes 00:14:06.285 Persistent Memory Region: Not Supported 00:14:06.285 Optional Asynchronous Events Supported 00:14:06.285 Namespace Attribute Notices: Supported 00:14:06.285 Firmware Activation Notices: Not Supported 00:14:06.285 ANA Change Notices: Not Supported 00:14:06.285 PLE Aggregate Log Change Notices: Not Supported 00:14:06.285 LBA Status Info Alert Notices: Not Supported 00:14:06.285 EGE Aggregate Log Change Notices: Not Supported 00:14:06.285 Normal NVM Subsystem Shutdown event: Not Supported 00:14:06.285 Zone Descriptor Change Notices: Not Supported 00:14:06.285 Discovery Log Change Notices: Not Supported 00:14:06.285 Controller Attributes 00:14:06.285 128-bit Host Identifier: Supported 00:14:06.285 Non-Operational Permissive Mode: Not Supported 00:14:06.285 NVM Sets: Not Supported 00:14:06.285 Read Recovery Levels: Not Supported 00:14:06.285 Endurance Groups: Not Supported 00:14:06.285 Predictable Latency Mode: Not Supported 00:14:06.285 Traffic Based Keep ALive: Not Supported 00:14:06.285 Namespace Granularity: Not Supported 00:14:06.285 SQ Associations: Not Supported 00:14:06.285 UUID List: Not Supported 00:14:06.285 Multi-Domain Subsystem: Not Supported 00:14:06.285 Fixed Capacity Management: Not Supported 00:14:06.285 Variable Capacity Management: Not Supported 00:14:06.286 Delete Endurance Group: Not Supported 00:14:06.286 Delete NVM Set: Not Supported 00:14:06.286 Extended LBA Formats Supported: Not Supported 00:14:06.286 Flexible Data Placement Supported: Not Supported 00:14:06.286 00:14:06.286 Controller Memory Buffer Support 00:14:06.286 ================================ 00:14:06.286 Supported: No 00:14:06.286 00:14:06.286 Persistent Memory Region Support 00:14:06.286 ================================ 00:14:06.286 Supported: No 00:14:06.286 00:14:06.286 Admin Command Set Attributes 00:14:06.286 ============================ 00:14:06.286 Security Send/Receive: Not Supported 00:14:06.286 Format NVM: Not Supported 00:14:06.286 Firmware Activate/Download: Not Supported 00:14:06.286 Namespace Management: Not Supported 00:14:06.286 Device Self-Test: Not Supported 00:14:06.286 Directives: Not Supported 00:14:06.286 NVMe-MI: Not Supported 00:14:06.286 Virtualization Management: Not Supported 00:14:06.286 Doorbell Buffer Config: Not Supported 00:14:06.286 Get LBA Status Capability: Not Supported 00:14:06.286 Command & Feature Lockdown Capability: Not Supported 00:14:06.286 Abort Command Limit: 4 00:14:06.286 Async Event Request Limit: 4 00:14:06.286 Number of Firmware Slots: N/A 00:14:06.286 Firmware Slot 1 Read-Only: N/A 00:14:06.286 Firmware Activation Without Reset: N/A 00:14:06.286 Multiple Update Detection Support: N/A 00:14:06.286 Firmware Update Granularity: No Information Provided 00:14:06.286 Per-Namespace SMART Log: No 00:14:06.286 Asymmetric Namespace Access Log Page: Not Supported 00:14:06.286 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:06.286 Command Effects Log Page: Supported 00:14:06.286 Get Log Page Extended Data: Supported 00:14:06.286 Telemetry Log Pages: Not Supported 00:14:06.286 Persistent Event Log Pages: Not Supported 00:14:06.286 Supported Log Pages Log Page: May Support 00:14:06.286 Commands Supported & Effects Log Page: Not Supported 00:14:06.286 Feature Identifiers & Effects Log Page:May 
Support 00:14:06.286 NVMe-MI Commands & Effects Log Page: May Support 00:14:06.286 Data Area 4 for Telemetry Log: Not Supported 00:14:06.286 Error Log Page Entries Supported: 128 00:14:06.286 Keep Alive: Supported 00:14:06.286 Keep Alive Granularity: 10000 ms 00:14:06.286 00:14:06.286 NVM Command Set Attributes 00:14:06.286 ========================== 00:14:06.286 Submission Queue Entry Size 00:14:06.286 Max: 64 00:14:06.286 Min: 64 00:14:06.286 Completion Queue Entry Size 00:14:06.286 Max: 16 00:14:06.286 Min: 16 00:14:06.286 Number of Namespaces: 32 00:14:06.286 Compare Command: Supported 00:14:06.286 Write Uncorrectable Command: Not Supported 00:14:06.286 Dataset Management Command: Supported 00:14:06.286 Write Zeroes Command: Supported 00:14:06.286 Set Features Save Field: Not Supported 00:14:06.286 Reservations: Not Supported 00:14:06.286 Timestamp: Not Supported 00:14:06.286 Copy: Supported 00:14:06.286 Volatile Write Cache: Present 00:14:06.286 Atomic Write Unit (Normal): 1 00:14:06.286 Atomic Write Unit (PFail): 1 00:14:06.286 Atomic Compare & Write Unit: 1 00:14:06.286 Fused Compare & Write: Supported 00:14:06.286 Scatter-Gather List 00:14:06.286 SGL Command Set: Supported (Dword aligned) 00:14:06.286 SGL Keyed: Not Supported 00:14:06.286 SGL Bit Bucket Descriptor: Not Supported 00:14:06.286 SGL Metadata Pointer: Not Supported 00:14:06.286 Oversized SGL: Not Supported 00:14:06.286 SGL Metadata Address: Not Supported 00:14:06.286 SGL Offset: Not Supported 00:14:06.286 Transport SGL Data Block: Not Supported 00:14:06.286 Replay Protected Memory Block: Not Supported 00:14:06.286 00:14:06.286 Firmware Slot Information 00:14:06.286 ========================= 00:14:06.286 Active slot: 1 00:14:06.286 Slot 1 Firmware Revision: 24.09 00:14:06.286 00:14:06.286 00:14:06.286 Commands Supported and Effects 00:14:06.286 ============================== 00:14:06.286 Admin Commands 00:14:06.286 -------------- 00:14:06.286 Get Log Page (02h): Supported 00:14:06.286 Identify (06h): Supported 00:14:06.286 Abort (08h): Supported 00:14:06.286 Set Features (09h): Supported 00:14:06.286 Get Features (0Ah): Supported 00:14:06.286 Asynchronous Event Request (0Ch): Supported 00:14:06.286 Keep Alive (18h): Supported 00:14:06.286 I/O Commands 00:14:06.286 ------------ 00:14:06.286 Flush (00h): Supported LBA-Change 00:14:06.286 Write (01h): Supported LBA-Change 00:14:06.286 Read (02h): Supported 00:14:06.286 Compare (05h): Supported 00:14:06.286 Write Zeroes (08h): Supported LBA-Change 00:14:06.286 Dataset Management (09h): Supported LBA-Change 00:14:06.286 Copy (19h): Supported LBA-Change 00:14:06.286 Unknown (79h): Supported LBA-Change 00:14:06.286 Unknown (7Ah): Supported 00:14:06.286 00:14:06.286 Error Log 00:14:06.286 ========= 00:14:06.286 00:14:06.286 Arbitration 00:14:06.286 =========== 00:14:06.286 Arbitration Burst: 1 00:14:06.286 00:14:06.286 Power Management 00:14:06.286 ================ 00:14:06.286 Number of Power States: 1 00:14:06.286 Current Power State: Power State #0 00:14:06.286 Power State #0: 00:14:06.286 Max Power: 0.00 W 00:14:06.286 Non-Operational State: Operational 00:14:06.286 Entry Latency: Not Reported 00:14:06.286 Exit Latency: Not Reported 00:14:06.286 Relative Read Throughput: 0 00:14:06.286 Relative Read Latency: 0 00:14:06.286 Relative Write Throughput: 0 00:14:06.286 Relative Write Latency: 0 00:14:06.286 Idle Power: Not Reported 00:14:06.286 Active Power: Not Reported 00:14:06.286 Non-Operational Permissive Mode: Not Supported 00:14:06.286 00:14:06.286 Health Information 
00:14:06.286 ================== 00:14:06.286 Critical Warnings: 00:14:06.286 Available Spare Space: OK 00:14:06.286 Temperature: OK 00:14:06.286 Device Reliability: OK 00:14:06.286 Read Only: No 00:14:06.286 Volatile Memory Backup: OK 00:14:06.286 [2024-06-07 21:31:06.336197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:06.286 [2024-06-07 21:31:06.344034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:06.286 [2024-06-07 21:31:06.344067] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:06.286 [2024-06-07 21:31:06.344079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.287 [2024-06-07 21:31:06.344087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.287 [2024-06-07 21:31:06.344095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.287 [2024-06-07 21:31:06.344103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.287 [2024-06-07 21:31:06.344173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:06.287 [2024-06-07 21:31:06.344188] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:06.287 [2024-06-07 21:31:06.345174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:06.287 [2024-06-07 21:31:06.345234] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:06.287 [2024-06-07 21:31:06.345243] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:06.287 [2024-06-07 21:31:06.346177] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:06.287 [2024-06-07 21:31:06.346196] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:06.287 [2024-06-07 21:31:06.346253] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:06.287 [2024-06-07 21:31:06.347718] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.287 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:06.287 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:06.287 Available Spare: 0% 00:14:06.287 Available Spare Threshold: 0% 00:14:06.287 Life Percentage Used: 0% 00:14:06.287 Data Units Read: 0 00:14:06.287 Data Units Written: 0 00:14:06.287 Host Read Commands: 0 00:14:06.287 Host Write Commands: 0 00:14:06.287 Controller Busy Time: 0 minutes 00:14:06.287 Power Cycles: 0 00:14:06.287 Power On Hours: 0 hours 00:14:06.287 Unsafe Shutdowns: 0 00:14:06.287 Unrecoverable Media Errors: 0 00:14:06.287 Lifetime Error Log Entries: 0 00:14:06.287 Warning Temperature Time: 0
minutes 00:14:06.287 Critical Temperature Time: 0 minutes 00:14:06.287 00:14:06.287 Number of Queues 00:14:06.287 ================ 00:14:06.287 Number of I/O Submission Queues: 127 00:14:06.287 Number of I/O Completion Queues: 127 00:14:06.287 00:14:06.287 Active Namespaces 00:14:06.287 ================= 00:14:06.287 Namespace ID:1 00:14:06.287 Error Recovery Timeout: Unlimited 00:14:06.287 Command Set Identifier: NVM (00h) 00:14:06.287 Deallocate: Supported 00:14:06.287 Deallocated/Unwritten Error: Not Supported 00:14:06.287 Deallocated Read Value: Unknown 00:14:06.287 Deallocate in Write Zeroes: Not Supported 00:14:06.287 Deallocated Guard Field: 0xFFFF 00:14:06.287 Flush: Supported 00:14:06.287 Reservation: Supported 00:14:06.287 Namespace Sharing Capabilities: Multiple Controllers 00:14:06.287 Size (in LBAs): 131072 (0GiB) 00:14:06.287 Capacity (in LBAs): 131072 (0GiB) 00:14:06.287 Utilization (in LBAs): 131072 (0GiB) 00:14:06.287 NGUID: B6BC0C32D9C04364B6607E5EEB8AD815 00:14:06.287 UUID: b6bc0c32-d9c0-4364-b660-7e5eeb8ad815 00:14:06.287 Thin Provisioning: Not Supported 00:14:06.287 Per-NS Atomic Units: Yes 00:14:06.287 Atomic Boundary Size (Normal): 0 00:14:06.287 Atomic Boundary Size (PFail): 0 00:14:06.287 Atomic Boundary Offset: 0 00:14:06.287 Maximum Single Source Range Length: 65535 00:14:06.287 Maximum Copy Length: 65535 00:14:06.287 Maximum Source Range Count: 1 00:14:06.287 NGUID/EUI64 Never Reused: No 00:14:06.287 Namespace Write Protected: No 00:14:06.287 Number of LBA Formats: 1 00:14:06.287 Current LBA Format: LBA Format #00 00:14:06.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:06.287 00:14:06.287 21:31:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:06.287 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.546 [2024-06-07 21:31:06.568816] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:11.831 Initializing NVMe Controllers 00:14:11.831 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:11.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:11.831 Initialization complete. Launching workers. 
00:14:11.831 ======================================================== 00:14:11.831 Latency(us) 00:14:11.831 Device Information : IOPS MiB/s Average min max 00:14:11.831 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 42124.14 164.55 3038.18 1023.50 9862.43 00:14:11.831 ======================================================== 00:14:11.831 Total : 42124.14 164.55 3038.18 1023.50 9862.43 00:14:11.831 00:14:11.831 [2024-06-07 21:31:11.672308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:11.831 21:31:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:11.831 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.831 [2024-06-07 21:31:11.919083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:17.102 Initializing NVMe Controllers 00:14:17.102 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:17.102 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:17.102 Initialization complete. Launching workers. 00:14:17.102 ======================================================== 00:14:17.102 Latency(us) 00:14:17.102 Device Information : IOPS MiB/s Average min max 00:14:17.102 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24067.03 94.01 5317.78 1437.46 7906.87 00:14:17.102 ======================================================== 00:14:17.102 Total : 24067.03 94.01 5317.78 1437.46 7906.87 00:14:17.102 00:14:17.102 [2024-06-07 21:31:16.942504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:17.102 21:31:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:17.102 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.102 [2024-06-07 21:31:17.197851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:22.375 [2024-06-07 21:31:22.333147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:22.375 Initializing NVMe Controllers 00:14:22.375 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.375 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:22.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:22.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:22.375 Initialization complete. Launching workers. 
00:14:22.375 Starting thread on core 2 00:14:22.375 Starting thread on core 3 00:14:22.375 Starting thread on core 1 00:14:22.375 21:31:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:22.375 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.635 [2024-06-07 21:31:22.662253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.827 [2024-06-07 21:31:26.528917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.827 Initializing NVMe Controllers 00:14:26.827 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.827 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.827 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:26.827 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:26.827 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:26.827 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:26.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:26.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:26.827 Initialization complete. Launching workers. 00:14:26.827 Starting thread on core 1 with urgent priority queue 00:14:26.827 Starting thread on core 2 with urgent priority queue 00:14:26.827 Starting thread on core 3 with urgent priority queue 00:14:26.827 Starting thread on core 0 with urgent priority queue 00:14:26.827 SPDK bdev Controller (SPDK2 ) core 0: 5523.00 IO/s 18.11 secs/100000 ios 00:14:26.827 SPDK bdev Controller (SPDK2 ) core 1: 3455.00 IO/s 28.94 secs/100000 ios 00:14:26.827 SPDK bdev Controller (SPDK2 ) core 2: 5061.67 IO/s 19.76 secs/100000 ios 00:14:26.827 SPDK bdev Controller (SPDK2 ) core 3: 4299.00 IO/s 23.26 secs/100000 ios 00:14:26.827 ======================================================== 00:14:26.827 00:14:26.827 21:31:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:26.827 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.827 [2024-06-07 21:31:26.851529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:26.827 Initializing NVMe Controllers 00:14:26.827 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.827 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.827 Namespace ID: 1 size: 0GB 00:14:26.827 Initialization complete. 00:14:26.827 INFO: using host memory buffer for IO 00:14:26.827 Hello world! 
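
The example apps in the runs above (spdk_nvme_perf, reconnect, arbitration, hello_world) all locate the target the same way: a transport ID string passed via -r, carrying the VFIOUSER trtype, the controller's socket directory as traddr, and the subsystem NQN. A minimal standalone sketch of the same invocations, assuming a local SPDK build tree at $SPDK; the flags are taken verbatim from the runs above:

  # assumption: point SPDK at wherever the tree is built
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 5 s of 4 KiB sequential reads at queue depth 128 on core 1 (mirrors the -w read run above)
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # 50/50 random read/write at queue depth 32 on cores 1-3 (mirrors the reconnect run above)
  $SPDK/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
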
00:14:26.827 [2024-06-07 21:31:26.863609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.827 21:31:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:26.827 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.086 [2024-06-07 21:31:27.176808] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:28.024 Initializing NVMe Controllers 00:14:28.024 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.024 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.024 Initialization complete. Launching workers. 00:14:28.024 submit (in ns) avg, min, max = 8044.8, 4532.7, 4005660.0 00:14:28.024 complete (in ns) avg, min, max = 27843.7, 2709.1, 5995490.0 00:14:28.024 00:14:28.024 Submit histogram 00:14:28.024 ================ 00:14:28.024 Range in us Cumulative Count 00:14:28.024 4.509 - 4.538: 0.0325% ( 4) 00:14:28.024 4.538 - 4.567: 0.6740% ( 79) 00:14:28.024 4.567 - 4.596: 2.6068% ( 238) 00:14:28.024 4.596 - 4.625: 6.1069% ( 431) 00:14:28.024 4.625 - 4.655: 9.5582% ( 425) 00:14:28.024 4.655 - 4.684: 20.2534% ( 1317) 00:14:28.024 4.684 - 4.713: 32.0448% ( 1452) 00:14:28.024 4.713 - 4.742: 45.2656% ( 1628) 00:14:28.024 4.742 - 4.771: 56.6591% ( 1403) 00:14:28.024 4.771 - 4.800: 66.8101% ( 1250) 00:14:28.024 4.800 - 4.829: 76.4658% ( 1189) 00:14:28.024 4.829 - 4.858: 82.2397% ( 711) 00:14:28.024 4.858 - 4.887: 85.4475% ( 395) 00:14:28.024 4.887 - 4.916: 87.0554% ( 198) 00:14:28.024 4.916 - 4.945: 88.3466% ( 159) 00:14:28.024 4.945 - 4.975: 90.1088% ( 217) 00:14:28.024 4.975 - 5.004: 92.1796% ( 255) 00:14:28.024 5.004 - 5.033: 94.1449% ( 242) 00:14:28.024 5.033 - 5.062: 95.8503% ( 210) 00:14:28.024 5.062 - 5.091: 97.1252% ( 157) 00:14:28.024 5.091 - 5.120: 98.0266% ( 111) 00:14:28.024 5.120 - 5.149: 98.6438% ( 76) 00:14:28.024 5.149 - 5.178: 99.0661% ( 52) 00:14:28.024 5.178 - 5.207: 99.2285% ( 20) 00:14:28.024 5.207 - 5.236: 99.3341% ( 13) 00:14:28.024 5.236 - 5.265: 99.3747% ( 5) 00:14:28.024 5.295 - 5.324: 99.4153% ( 5) 00:14:28.024 5.324 - 5.353: 99.4315% ( 2) 00:14:28.024 5.353 - 5.382: 99.4640% ( 4) 00:14:28.024 5.411 - 5.440: 99.4803% ( 2) 00:14:28.024 5.469 - 5.498: 99.4884% ( 1) 00:14:28.024 7.389 - 7.418: 99.4965% ( 1) 00:14:28.024 7.913 - 7.971: 99.5127% ( 2) 00:14:28.024 8.029 - 8.087: 99.5209% ( 1) 00:14:28.024 8.145 - 8.204: 99.5290% ( 1) 00:14:28.024 8.204 - 8.262: 99.5371% ( 1) 00:14:28.024 8.436 - 8.495: 99.5452% ( 1) 00:14:28.024 8.495 - 8.553: 99.5615% ( 2) 00:14:28.024 8.611 - 8.669: 99.5696% ( 1) 00:14:28.024 8.669 - 8.727: 99.5777% ( 1) 00:14:28.024 8.727 - 8.785: 99.5858% ( 1) 00:14:28.024 8.785 - 8.844: 99.5940% ( 1) 00:14:28.024 8.902 - 8.960: 99.6102% ( 2) 00:14:28.024 8.960 - 9.018: 99.6183% ( 1) 00:14:28.024 9.076 - 9.135: 99.6346% ( 2) 00:14:28.024 9.135 - 9.193: 99.6427% ( 1) 00:14:28.024 9.193 - 9.251: 99.6508% ( 1) 00:14:28.024 9.251 - 9.309: 99.6589% ( 1) 00:14:28.024 9.367 - 9.425: 99.6670% ( 1) 00:14:28.024 9.425 - 9.484: 99.6752% ( 1) 00:14:28.024 9.542 - 9.600: 99.6914% ( 2) 00:14:28.024 9.600 - 9.658: 99.6995% ( 1) 00:14:28.024 9.658 - 9.716: 99.7076% ( 1) 00:14:28.024 9.716 - 9.775: 99.7158% ( 1) 00:14:28.024 9.891 - 9.949: 99.7239% ( 1) 00:14:28.024 9.949 - 10.007: 99.7320% ( 1) 00:14:28.024 10.065 - 10.124: 
99.7401% ( 1) 00:14:28.024 10.240 - 10.298: 99.7483% ( 1) 00:14:28.024 10.356 - 10.415: 99.7726% ( 3) 00:14:28.024 10.415 - 10.473: 99.7807% ( 1) 00:14:28.024 10.473 - 10.531: 99.7889% ( 1) 00:14:28.024 10.764 - 10.822: 99.7970% ( 1) 00:14:28.024 10.996 - 11.055: 99.8051% ( 1) 00:14:28.024 11.113 - 11.171: 99.8213% ( 2) 00:14:28.024 11.229 - 11.287: 99.8376% ( 2) 00:14:28.024 11.287 - 11.345: 99.8457% ( 1) 00:14:28.024 11.345 - 11.404: 99.8538% ( 1) 00:14:28.024 11.404 - 11.462: 99.8701% ( 2) 00:14:28.024 11.520 - 11.578: 99.8782% ( 1) 00:14:28.024 11.578 - 11.636: 99.8863% ( 1) 00:14:28.024 11.985 - 12.044: 99.8944% ( 1) 00:14:28.024 12.160 - 12.218: 99.9107% ( 2) 00:14:28.024 15.476 - 15.593: 99.9188% ( 1) 00:14:28.024 3991.738 - 4021.527: 100.0000% ( 10) 00:14:28.024 00:14:28.024 Complete histogram 00:14:28.024 ================== 00:14:28.024 Range in us Cumulative Count 00:14:28.024 2.705 - 2.720: 0.1056% ( 13) 00:14:28.024 2.720 - 2.735: 6.0744% ( 735) 00:14:28.024 2.735 - 2.749: 39.6053% ( 4129) 00:14:28.024 2.749 - 2.764: 71.4796% ( 3925) [2024-06-07 21:31:28.273517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:28.284 2.764 - 2.778: 79.9415% ( 1042) 00:14:28.284 2.778 - 2.793: 85.0820% ( 633) 00:14:28.284 2.793 - 2.807: 89.4104% ( 533) 00:14:28.284 2.807 - 2.822: 91.6274% ( 273) 00:14:28.284 2.822 - 2.836: 94.7052% ( 379) 00:14:28.284 2.836 - 2.851: 97.4013% ( 332) 00:14:28.284 2.851 - 2.865: 98.3109% ( 112) 00:14:28.284 2.865 - 2.880: 98.6357% ( 40) 00:14:28.284 2.880 - 2.895: 98.8225% ( 23) 00:14:28.284 2.895 - 2.909: 98.8712% ( 6) 00:14:28.284 2.909 - 2.924: 98.9280% ( 7) 00:14:28.284 2.924 - 2.938: 98.9524% ( 3) 00:14:28.284 2.953 - 2.967: 98.9687% ( 2) 00:14:28.284 2.967 - 2.982: 98.9849% ( 2) 00:14:28.284 2.982 - 2.996: 99.0011% ( 2) 00:14:28.284 2.996 - 3.011: 99.0255% ( 3) 00:14:28.284 3.011 - 3.025: 99.0417% ( 2) 00:14:28.284 3.025 - 3.040: 99.0499% ( 1) 00:14:28.284 3.040 - 3.055: 99.0661% ( 2) 00:14:28.284 3.055 - 3.069: 99.0986% ( 4) 00:14:28.284 3.069 - 3.084: 99.1067% ( 1) 00:14:28.284 3.084 - 3.098: 99.1229% ( 2) 00:14:28.284 3.098 - 3.113: 99.1392% ( 2) 00:14:28.284 3.113 - 3.127: 99.1554% ( 2) 00:14:28.284 6.051 - 6.080: 99.1636% ( 1) 00:14:28.284 6.516 - 6.545: 99.1717% ( 1) 00:14:28.284 6.662 - 6.691: 99.1798% ( 1) 00:14:28.284 6.778 - 6.807: 99.1879% ( 1) 00:14:28.284 6.953 - 6.982: 99.1960% ( 1) 00:14:28.284 7.098 - 7.127: 99.2042% ( 1) 00:14:28.284 7.418 - 7.447: 99.2123% ( 1) 00:14:28.284 7.447 - 7.505: 99.2366% ( 3) 00:14:28.284 7.505 - 7.564: 99.2448% ( 1) 00:14:28.284 8.087 - 8.145: 99.2529% ( 1) 00:14:28.284 8.204 - 8.262: 99.2610% ( 1) 00:14:28.284 8.320 - 8.378: 99.2691% ( 1) 00:14:28.284 8.378 - 8.436: 99.2772% ( 1) 00:14:28.284 8.553 - 8.611: 99.2854% ( 1) 00:14:28.284 8.611 - 8.669: 99.2935% ( 1) 00:14:28.284 8.669 - 8.727: 99.3016% ( 1) 00:14:28.284 8.902 - 8.960: 99.3097% ( 1) 00:14:28.284 9.018 - 9.076: 99.3178% ( 1) 00:14:28.284 9.367 - 9.425: 99.3260% ( 1) 00:14:28.284 9.425 - 9.484: 99.3341% ( 1) 00:14:28.284 9.891 - 9.949: 99.3422% ( 1) 00:14:28.284 10.415 - 10.473: 99.3503% ( 1) 00:14:28.284 16.291 - 16.407: 99.3585% ( 1) 00:14:28.284 16.989 - 17.105: 99.3666% ( 1) 00:14:28.284 1124.538 - 1131.985: 99.3747% ( 1) 00:14:28.284 1817.135 - 1824.582: 99.3828% ( 1) 00:14:28.284 3991.738 - 4021.527: 99.9919% ( 75) 00:14:28.284 5987.607 - 6017.396: 100.0000% ( 1) 00:14:28.284 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:28.284 [ 00:14:28.284 { 00:14:28.284 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:28.284 "subtype": "Discovery", 00:14:28.284 "listen_addresses": [], 00:14:28.284 "allow_any_host": true, 00:14:28.284 "hosts": [] 00:14:28.284 }, 00:14:28.284 { 00:14:28.284 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:28.284 "subtype": "NVMe", 00:14:28.284 "listen_addresses": [ 00:14:28.284 { 00:14:28.284 "trtype": "VFIOUSER", 00:14:28.284 "adrfam": "IPv4", 00:14:28.284 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:28.284 "trsvcid": "0" 00:14:28.284 } 00:14:28.284 ], 00:14:28.284 "allow_any_host": true, 00:14:28.284 "hosts": [], 00:14:28.284 "serial_number": "SPDK1", 00:14:28.284 "model_number": "SPDK bdev Controller", 00:14:28.284 "max_namespaces": 32, 00:14:28.284 "min_cntlid": 1, 00:14:28.284 "max_cntlid": 65519, 00:14:28.284 "namespaces": [ 00:14:28.284 { 00:14:28.284 "nsid": 1, 00:14:28.284 "bdev_name": "Malloc1", 00:14:28.284 "name": "Malloc1", 00:14:28.284 "nguid": "BD59714F98A74D7E8A6E8E15DD8C7C11", 00:14:28.284 "uuid": "bd59714f-98a7-4d7e-8a6e-8e15dd8c7c11" 00:14:28.284 }, 00:14:28.284 { 00:14:28.284 "nsid": 2, 00:14:28.284 "bdev_name": "Malloc3", 00:14:28.284 "name": "Malloc3", 00:14:28.284 "nguid": "66A1C8EDC3FE4EBF8285E11B75F83AB6", 00:14:28.284 "uuid": "66a1c8ed-c3fe-4ebf-8285-e11b75f83ab6" 00:14:28.284 } 00:14:28.284 ] 00:14:28.284 }, 00:14:28.284 { 00:14:28.284 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:28.284 "subtype": "NVMe", 00:14:28.284 "listen_addresses": [ 00:14:28.284 { 00:14:28.284 "trtype": "VFIOUSER", 00:14:28.284 "adrfam": "IPv4", 00:14:28.284 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:28.284 "trsvcid": "0" 00:14:28.284 } 00:14:28.284 ], 00:14:28.284 "allow_any_host": true, 00:14:28.284 "hosts": [], 00:14:28.284 "serial_number": "SPDK2", 00:14:28.284 "model_number": "SPDK bdev Controller", 00:14:28.284 "max_namespaces": 32, 00:14:28.284 "min_cntlid": 1, 00:14:28.284 "max_cntlid": 65519, 00:14:28.284 "namespaces": [ 00:14:28.284 { 00:14:28.284 "nsid": 1, 00:14:28.284 "bdev_name": "Malloc2", 00:14:28.284 "name": "Malloc2", 00:14:28.284 "nguid": "B6BC0C32D9C04364B6607E5EEB8AD815", 00:14:28.284 "uuid": "b6bc0c32-d9c0-4364-b660-7e5eeb8ad815" 00:14:28.284 } 00:14:28.284 ] 00:14:28.284 } 00:14:28.284 ] 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1370589 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1264 -- # local i=0 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:28.284 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:28.284 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.544 [2024-06-07 21:31:28.688513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:28.544 Malloc4 00:14:28.544 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:28.802 [2024-06-07 21:31:28.938403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:28.802 21:31:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:28.802 Asynchronous Event Request test 00:14:28.802 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.802 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.802 Registering asynchronous event callbacks... 00:14:28.802 Starting namespace attribute notice tests for all controllers... 00:14:28.802 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:28.802 aer_cb - Changed Namespace 00:14:28.802 Cleaning up... 
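
The "aer_cb - Changed Namespace" notice in the test output above is produced by a plain RPC sequence: the aer test app connects and arms an Asynchronous Event Request, then a second namespace is hot-added to the subsystem, which completes the AER with log page 4 (Changed Namespace List). A minimal sketch of that sequence, assuming the target from this run is still up, rpc.py talks to the default /var/tmp/spdk.sock, and the aer app marks readiness by touching the -t file (which is what the waitforfile step above relies on):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: local build tree
  # arm the AER listener in the background; -n 2 expects the notice for NSID 2
  $SPDK/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done     # wait until the listener is armed
  # hot-add a second namespace; this is what fires the notice
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
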
00:14:29.062 [ 00:14:29.062 { 00:14:29.062 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:29.062 "subtype": "Discovery", 00:14:29.062 "listen_addresses": [], 00:14:29.062 "allow_any_host": true, 00:14:29.062 "hosts": [] 00:14:29.062 }, 00:14:29.062 { 00:14:29.062 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:29.062 "subtype": "NVMe", 00:14:29.062 "listen_addresses": [ 00:14:29.062 { 00:14:29.062 "trtype": "VFIOUSER", 00:14:29.062 "adrfam": "IPv4", 00:14:29.062 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:29.062 "trsvcid": "0" 00:14:29.062 } 00:14:29.062 ], 00:14:29.062 "allow_any_host": true, 00:14:29.062 "hosts": [], 00:14:29.062 "serial_number": "SPDK1", 00:14:29.062 "model_number": "SPDK bdev Controller", 00:14:29.062 "max_namespaces": 32, 00:14:29.062 "min_cntlid": 1, 00:14:29.062 "max_cntlid": 65519, 00:14:29.062 "namespaces": [ 00:14:29.062 { 00:14:29.062 "nsid": 1, 00:14:29.062 "bdev_name": "Malloc1", 00:14:29.062 "name": "Malloc1", 00:14:29.062 "nguid": "BD59714F98A74D7E8A6E8E15DD8C7C11", 00:14:29.062 "uuid": "bd59714f-98a7-4d7e-8a6e-8e15dd8c7c11" 00:14:29.062 }, 00:14:29.062 { 00:14:29.062 "nsid": 2, 00:14:29.062 "bdev_name": "Malloc3", 00:14:29.062 "name": "Malloc3", 00:14:29.062 "nguid": "66A1C8EDC3FE4EBF8285E11B75F83AB6", 00:14:29.062 "uuid": "66a1c8ed-c3fe-4ebf-8285-e11b75f83ab6" 00:14:29.062 } 00:14:29.062 ] 00:14:29.062 }, 00:14:29.062 { 00:14:29.062 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:29.062 "subtype": "NVMe", 00:14:29.062 "listen_addresses": [ 00:14:29.062 { 00:14:29.062 "trtype": "VFIOUSER", 00:14:29.062 "adrfam": "IPv4", 00:14:29.062 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:29.062 "trsvcid": "0" 00:14:29.062 } 00:14:29.062 ], 00:14:29.062 "allow_any_host": true, 00:14:29.062 "hosts": [], 00:14:29.062 "serial_number": "SPDK2", 00:14:29.062 "model_number": "SPDK bdev Controller", 00:14:29.062 "max_namespaces": 32, 00:14:29.062 "min_cntlid": 1, 00:14:29.062 "max_cntlid": 65519, 00:14:29.062 "namespaces": [ 00:14:29.062 { 00:14:29.062 "nsid": 1, 00:14:29.062 "bdev_name": "Malloc2", 00:14:29.062 "name": "Malloc2", 00:14:29.062 "nguid": "B6BC0C32D9C04364B6607E5EEB8AD815", 00:14:29.062 "uuid": "b6bc0c32-d9c0-4364-b660-7e5eeb8ad815" 00:14:29.062 }, 00:14:29.062 { 00:14:29.062 "nsid": 2, 00:14:29.062 "bdev_name": "Malloc4", 00:14:29.062 "name": "Malloc4", 00:14:29.062 "nguid": "4FE86393234842719956AF9E8BFE2F2D", 00:14:29.062 "uuid": "4fe86393-2348-4271-9956-af9e8bfe2f2d" 00:14:29.062 } 00:14:29.062 ] 00:14:29.062 } 00:14:29.062 ] 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1370589 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1361975 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 1361975 ']' 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 1361975 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1361975 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo 
']' 00:14:29.062 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1361975' 00:14:29.062 killing process with pid 1361975 00:14:29.063 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 1361975 00:14:29.063 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 1361975 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1370858 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1370858' 00:14:29.322 Process pid: 1370858 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1370858 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 1370858 ']' 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:29.322 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:29.322 [2024-06-07 21:31:29.582592] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:29.322 [2024-06-07 21:31:29.583393] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:14:29.322 [2024-06-07 21:31:29.583431] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.582 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.582 [2024-06-07 21:31:29.660356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.582 [2024-06-07 21:31:29.752369] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.582 [2024-06-07 21:31:29.752412] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:29.582 [2024-06-07 21:31:29.752423] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.582 [2024-06-07 21:31:29.752432] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.582 [2024-06-07 21:31:29.752440] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.582 [2024-06-07 21:31:29.752502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.582 [2024-06-07 21:31:29.752526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.582 [2024-06-07 21:31:29.752666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.582 [2024-06-07 21:31:29.752667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.582 [2024-06-07 21:31:29.837543] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:29.582 [2024-06-07 21:31:29.837567] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:29.582 [2024-06-07 21:31:29.837975] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:29.582 [2024-06-07 21:31:29.838147] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:29.582 [2024-06-07 21:31:29.838473] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:29.841 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:29.841 21:31:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:14:29.841 21:31:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:30.807 21:31:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:30.807 21:31:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:31.116 21:31:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:31.116 21:31:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.116 21:31:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:31.116 21:31:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:31.116 Malloc1 00:14:31.116 21:31:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:31.375 21:31:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:31.634 21:31:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:31.893 21:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:14:31.893 21:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:31.893 21:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:32.152 Malloc2 00:14:32.152 21:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:32.412 21:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:32.412 21:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1370858 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 1370858 ']' 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 1370858 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1370858 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1370858' 00:14:32.671 killing process with pid 1370858 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 1370858 00:14:32.671 21:31:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 1370858 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:32.930 00:14:32.930 real 0m52.821s 00:14:32.930 user 3m28.513s 00:14:32.930 sys 0m3.675s 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:32.930 ************************************ 00:14:32.930 END TEST nvmf_vfio_user 00:14:32.930 ************************************ 00:14:32.930 21:31:33 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:32.930 21:31:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:32.930 21:31:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:32.930 21:31:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.930 ************************************ 00:14:32.930 START TEST nvmf_vfio_user_nvme_compliance 00:14:32.930 
************************************ 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:32.930 * Looking for test storage... 00:14:32.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.930 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1371529 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1371529' 00:14:33.189 Process pid: 1371529 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:33.189 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1371529 00:14:33.190 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:33.190 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 1371529 ']' 00:14:33.190 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.190 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:33.190 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.190 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:33.190 21:31:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:33.190 [2024-06-07 21:31:33.277264] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:14:33.190 [2024-06-07 21:31:33.277324] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.190 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.190 [2024-06-07 21:31:33.365428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:33.190 [2024-06-07 21:31:33.455740] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.190 [2024-06-07 21:31:33.455783] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.190 [2024-06-07 21:31:33.455794] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.190 [2024-06-07 21:31:33.455802] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.190 [2024-06-07 21:31:33.455809] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:33.190 [2024-06-07 21:31:33.455862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.190 [2024-06-07 21:31:33.455962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.190 [2024-06-07 21:31:33.455966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.126 21:31:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:34.126 21:31:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:14:34.126 21:31:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:35.062 malloc0 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:35.062 21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:35.062 
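
The rpc_cmd steps above are thin wrappers over scripts/rpc.py, so the whole fixture that the compliance binary (run next) depends on reduces to one VFIOUSER transport, one malloc-backed namespace, and one vfio-user listener. A minimal sketch of the equivalent direct RPC sequence against a freshly started nvmf_tgt, reusing the names from the run above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: local build tree
  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user                       # the socket directory doubles as traddr
  $RPC bdev_malloc_create 64 512 -b malloc0         # 64 MB bdev with 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
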
21:31:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:35.321 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.321 00:14:35.321 00:14:35.321 CUnit - A unit testing framework for C - Version 2.1-3 00:14:35.321 http://cunit.sourceforge.net/ 00:14:35.321 00:14:35.321 00:14:35.321 Suite: nvme_compliance 00:14:35.321 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-07 21:31:35.491607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.321 [2024-06-07 21:31:35.493051] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:35.321 [2024-06-07 21:31:35.493071] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:35.321 [2024-06-07 21:31:35.493080] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:35.321 [2024-06-07 21:31:35.494640] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.321 passed 00:14:35.580 Test: admin_identify_ctrlr_verify_fused ...[2024-06-07 21:31:35.596402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.580 [2024-06-07 21:31:35.599427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.580 passed 00:14:35.580 Test: admin_identify_ns ...[2024-06-07 21:31:35.699802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.580 [2024-06-07 21:31:35.759042] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:35.580 [2024-06-07 21:31:35.767052] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:35.580 [2024-06-07 21:31:35.788169] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.580 passed 00:14:35.838 Test: admin_get_features_mandatory_features ...[2024-06-07 21:31:35.886216] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.838 [2024-06-07 21:31:35.889237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.838 passed 00:14:35.838 Test: admin_get_features_optional_features ...[2024-06-07 21:31:35.990952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.838 [2024-06-07 21:31:35.993973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.838 passed 00:14:35.838 Test: admin_set_features_number_of_queues ...[2024-06-07 21:31:36.090851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.096 [2024-06-07 21:31:36.195133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.097 passed 00:14:36.097 Test: admin_get_log_page_mandatory_logs ...[2024-06-07 21:31:36.294203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.097 [2024-06-07 21:31:36.297232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.097 passed 00:14:36.355 Test: admin_get_log_page_with_lpo ...[2024-06-07 21:31:36.394065] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.355 [2024-06-07 21:31:36.464046] 
ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:36.355 [2024-06-07 21:31:36.477137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.355 passed 00:14:36.356 Test: fabric_property_get ...[2024-06-07 21:31:36.573997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.356 [2024-06-07 21:31:36.575330] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:36.356 [2024-06-07 21:31:36.577031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.356 passed 00:14:36.613 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-07 21:31:36.676739] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.613 [2024-06-07 21:31:36.678007] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:36.613 [2024-06-07 21:31:36.679757] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.613 passed 00:14:36.613 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-07 21:31:36.779831] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.613 [2024-06-07 21:31:36.864042] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:36.614 [2024-06-07 21:31:36.880037] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:36.871 [2024-06-07 21:31:36.885148] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.871 passed 00:14:36.871 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-07 21:31:36.980917] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.871 [2024-06-07 21:31:36.982191] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:36.871 [2024-06-07 21:31:36.983932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.871 passed 00:14:36.871 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-07 21:31:37.083840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.129 [2024-06-07 21:31:37.159040] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:37.129 [2024-06-07 21:31:37.183032] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:37.129 [2024-06-07 21:31:37.188137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.129 passed 00:14:37.129 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-07 21:31:37.283912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.129 [2024-06-07 21:31:37.285188] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:37.129 [2024-06-07 21:31:37.285212] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:37.129 [2024-06-07 21:31:37.286932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.129 passed 00:14:37.129 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-07 21:31:37.386794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.388 [2024-06-07 21:31:37.478038] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:14:37.388 [2024-06-07 21:31:37.486040] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:37.388 [2024-06-07 21:31:37.494036] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:37.388 [2024-06-07 21:31:37.502037] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:37.388 [2024-06-07 21:31:37.530136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.388 passed 00:14:37.388 Test: admin_create_io_sq_verify_pc ...[2024-06-07 21:31:37.629191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.388 [2024-06-07 21:31:37.648044] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:37.647 [2024-06-07 21:31:37.665387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:37.647 passed 00:14:37.647 Test: admin_create_io_qp_max_qps ...[2024-06-07 21:31:37.761070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:39.022 [2024-06-07 21:31:38.860038] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:39.022 [2024-06-07 21:31:39.241287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:39.022 passed 00:14:39.280 Test: admin_create_io_sq_shared_cq ...[2024-06-07 21:31:39.340410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:39.280 [2024-06-07 21:31:39.472046] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:39.280 [2024-06-07 21:31:39.509106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:39.539 passed 00:14:39.539 00:14:39.539 Run Summary: Type Total Ran Passed Failed Inactive 00:14:39.539 suites 1 1 n/a 0 0 00:14:39.539 tests 18 18 18 0 0 00:14:39.539 asserts 360 360 360 0 n/a 00:14:39.539 00:14:39.539 Elapsed time = 1.695 seconds 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1371529 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 1371529 ']' 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 1371529 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1371529 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1371529' 00:14:39.539 killing process with pid 1371529 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 1371529 00:14:39.539 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 1371529 00:14:39.799 21:31:39 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:39.799 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:39.799 00:14:39.799 real 0m6.740s 00:14:39.799 user 0m19.260s 00:14:39.799 sys 0m0.565s 00:14:39.799 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:39.799 21:31:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 ************************************ 00:14:39.799 END TEST nvmf_vfio_user_nvme_compliance 00:14:39.799 ************************************ 00:14:39.799 21:31:39 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:39.799 21:31:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:39.799 21:31:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:39.799 21:31:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 ************************************ 00:14:39.799 START TEST nvmf_vfio_user_fuzz 00:14:39.799 ************************************ 00:14:39.799 21:31:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:39.799 * Looking for test storage... 00:14:39.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[paths/export.sh@3-@6: three more near-identical full PATH permutations plus the final export PATH and echo, elided]
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:39.799 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1372824 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1372824' 00:14:39.800 Process pid: 1372824 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1372824 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 1372824 ']' 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
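For orientation before the trace continues: the RPC sequence below boils down to standing up a vfio-user target and pointing the fuzzer at it. A minimal sketch, assuming SPDK's scripts/rpc.py stands in for the harness's rpc_cmd wrapper (every argument is taken from the trace itself):

    # Sketch: vfio-user fuzz target bring-up and fuzzer invocation.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                # target pinned to core 0
    "$SPDK"/scripts/rpc.py nvmf_create_transport -t VFIOUSER          # vfio-user transport
    mkdir -p /var/run/vfio-user
    "$SPDK"/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0       # 64 MiB RAM disk, 512 B blocks
    "$SPDK"/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # 30 s fuzz run on core 1 (-m 0x2) with a fixed seed (-S 123456):
    "$SPDK"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The target must be listening before the first RPC can succeed; the harness handles that ordering with the waitforlisten step visible above.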
00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:39.800 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.368 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:40.368 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:14:40.368 21:31:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.303 malloc0 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:41.303 21:31:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:13.379 Fuzzing completed. 
Shutting down the fuzz application 00:15:13.379 00:15:13.379 Dumping successful admin opcodes: 00:15:13.379 8, 9, 10, 24, 00:15:13.379 Dumping successful io opcodes: 00:15:13.379 0, 00:15:13.379 NS: 0x200003a1ef00 I/O qp, Total commands completed: 767679, total successful commands: 2971, random_seed: 943377408 00:15:13.379 NS: 0x200003a1ef00 admin qp, Total commands completed: 188377, total successful commands: 1513, random_seed: 1068046464 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1372824 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 1372824 ']' 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 1372824 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1372824 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1372824' 00:15:13.379 killing process with pid 1372824 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 1372824 00:15:13.379 21:32:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 1372824 00:15:13.380 21:32:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:13.380 21:32:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:13.380 00:15:13.380 real 0m32.351s 00:15:13.380 user 0m36.277s 00:15:13.380 sys 0m24.911s 00:15:13.380 21:32:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:13.380 21:32:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:13.380 ************************************ 00:15:13.380 END TEST nvmf_vfio_user_fuzz 00:15:13.380 ************************************ 00:15:13.380 21:32:12 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:13.380 21:32:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:13.380 21:32:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:13.380 21:32:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.380 ************************************ 00:15:13.380 START TEST nvmf_host_management 00:15:13.380 
************************************
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:15:13.380 * Looking for test storage...
00:15:13.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[nvmf/common.sh and scripts/common.sh sourced again: the NVMF_*/NVME_* environment dump, the toolchain PATH exports, and the NVMF_APP argument assembly are identical to the nvmf_vfio_user_fuzz section above and are elided]
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns
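The nvmftestinit phase traced next scans for Intel E810 NICs (device ID 0x159b, driver ice) and records the kernel net devices behind them. common.sh walks its prebuilt pci_bus_cache map for this; a rough standalone equivalent, assuming plain lspci and sysfs instead of that cache, would be:

    # Sketch only: enumerate Intel E810 (8086:159b) ports and their netdev names.
    net_devs=()
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] && net_devs+=("${path##*/}")    # e.g. cvl_0_0, cvl_0_1
        done
    done
    printf 'Found net device %s\n' "${net_devs[@]}"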
00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.380 21:32:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.661 21:32:18 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:18.661 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:18.661 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:18.661 Found net devices under 0000:af:00.0: cvl_0_0 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.661 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:18.662 Found net devices under 0000:af:00.1: cvl_0_1 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.662 21:32:18 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:18.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:15:18.662 00:15:18.662 --- 10.0.0.2 ping statistics --- 00:15:18.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.662 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:15:18.662 00:15:18.662 --- 10.0.0.1 ping statistics --- 00:15:18.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.662 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1382054 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1382054 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1382054 ']' 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
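Before the target starts, nvmf_tcp_init has split the two E810 ports across network namespaces so NVMe/TCP traffic crosses the physical link rather than loopback: cvl_0_0 (10.0.0.2, target side) moves into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, initiator side) stays in the default namespace. Condensing the commands just traced:

    # Recap of the topology built by nvmf_tcp_init (names from the trace):
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # sanity check, as above

Both directions are verified by the two sub-millisecond pings in the trace.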
00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:18.662 21:32:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:18.662 [2024-06-07 21:32:18.847465] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:15:18.662 [2024-06-07 21:32:18.847520] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.662 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.921 [2024-06-07 21:32:18.935722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.921 [2024-06-07 21:32:19.027936] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.921 [2024-06-07 21:32:19.027981] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.921 [2024-06-07 21:32:19.027991] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.921 [2024-06-07 21:32:19.028000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.922 [2024-06-07 21:32:19.028007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.922 [2024-06-07 21:32:19.028121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.922 [2024-06-07 21:32:19.028245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.922 [2024-06-07 21:32:19.028362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:18.922 [2024-06-07 21:32:19.028363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.858 [2024-06-07 21:32:19.834702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:19.858 21:32:19 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.858 Malloc0 00:15:19.858 [2024-06-07 21:32:19.898680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1382353 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1382353 /var/tmp/bdevperf.sock 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1382353 ']' 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
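Note the --json /dev/fd/63 argument in the bdevperf command above: host_management.sh generates the bdev_nvme_attach_controller configuration on the fly (gen_nvmf_target_json 0, whose output is printed next) and, presumably via bash process substitution, hands it to bdevperf as a file descriptor. In effect, a sketch:

    # Sketch: feeding the generated target config to bdevperf.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10   # queue depth 64, 64 KiB verify I/O, 10 s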
00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:19.858 { 00:15:19.858 "params": { 00:15:19.858 "name": "Nvme$subsystem", 00:15:19.858 "trtype": "$TEST_TRANSPORT", 00:15:19.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:19.858 "adrfam": "ipv4", 00:15:19.858 "trsvcid": "$NVMF_PORT", 00:15:19.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:19.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:19.858 "hdgst": ${hdgst:-false}, 00:15:19.858 "ddgst": ${ddgst:-false} 00:15:19.858 }, 00:15:19.858 "method": "bdev_nvme_attach_controller" 00:15:19.858 } 00:15:19.858 EOF 00:15:19.858 )") 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:19.858 21:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:19.858 "params": { 00:15:19.858 "name": "Nvme0", 00:15:19.858 "trtype": "tcp", 00:15:19.858 "traddr": "10.0.0.2", 00:15:19.858 "adrfam": "ipv4", 00:15:19.858 "trsvcid": "4420", 00:15:19.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:19.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:19.858 "hdgst": false, 00:15:19.858 "ddgst": false 00:15:19.858 }, 00:15:19.858 "method": "bdev_nvme_attach_controller" 00:15:19.858 }' 00:15:19.858 [2024-06-07 21:32:19.995681] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:15:19.858 [2024-06-07 21:32:19.995739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382353 ] 00:15:19.858 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.858 [2024-06-07 21:32:20.089182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.117 [2024-06-07 21:32:20.180359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.377 Running I/O for 10 seconds... 
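While bdevperf runs, the harness's waitforio helper polls the bdevperf RPC socket until the Nvme0n1 bdev has completed at least 100 reads, which proves the TCP data path is live. A minimal sketch of that loop (the polling cadence is an assumption; counters and threshold match the trace below):

    # Sketch of waitforio: up to 10 polls of bdevperf's per-bdev iostat.
    for ((i = 10; i != 0; i--)); do
        reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break      # trace: 515 reads on the first poll
        sleep 1                            # assumed pacing, not shown in the trace
    done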
00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:20.947 21:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.947 21:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:15:20.947 21:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:15:20.947 21:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:20.947 21:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:20.947 21:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:20.947 21:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:20.947 21:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.947 21:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:20.947 [2024-06-07 21:32:21.015179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.947 [2024-06-07 21:32:21.015223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.947 [2024-06-07 21:32:21.015244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:20.947 [2024-06-07 21:32:21.015255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[several dozen further nvme_qpair print_command/print_completion pairs elided: every outstanding WRITE (lba 80128-81792) and READ (lba 73728 onward) on qid:1 completes with ABORTED - SQ DELETION (00/08) as the host is removed from the subsystem mid-I/O]
21:32:21.016137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.948 [2024-06-07 21:32:21.016400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.948 [2024-06-07 21:32:21.016412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.949 [2024-06-07 21:32:21.016620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.016692] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2950310 was disconnected and freed. reset controller. 00:15:20.949 [2024-06-07 21:32:21.018069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:20.949 21:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.949 21:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:20.949 21:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.949 task offset: 79872 on job bdev=Nvme0n1 fails 00:15:20.949 00:15:20.949 Latency(us) 00:15:20.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.949 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:20.949 Job: Nvme0n1 ended in about 0.51 seconds with error 00:15:20.949 Verification LBA range: start 0x0 length 0x400 00:15:20.949 Nvme0n1 : 0.51 1121.32 70.08 124.59 0.00 49833.08 2159.71 47662.55 00:15:20.949 =================================================================================================================== 00:15:20.949 Total : 1121.32 70.08 124.59 0.00 49833.08 2159.71 47662.55 00:15:20.949 21:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:20.949 [2024-06-07 21:32:21.020412] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:20.949 [2024-06-07 21:32:21.020434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251edc0 (9): Bad file descriptor 00:15:20.949 [2024-06-07 21:32:21.022066] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:15:20.949 [2024-06-07 21:32:21.022228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:20.949 [2024-06-07 21:32:21.022257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.949 [2024-06-07 21:32:21.022275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:15:20.949 [2024-06-07 21:32:21.022285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:15:20.949 [2024-06-07 21:32:21.022300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:15:20.949 [2024-06-07 21:32:21.022309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x251edc0 00:15:20.949 [2024-06-07 21:32:21.022335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251edc0 (9): Bad file descriptor 00:15:20.949 [2024-06-07 21:32:21.022352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:20.949 [2024-06-07 21:32:21.022361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:20.949 [2024-06-07 21:32:21.022372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:20.949 [2024-06-07 21:32:21.022388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:20.949 21:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.949 21:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1382353 00:15:21.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1382353) - No such process 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:21.886 { 00:15:21.886 "params": { 00:15:21.886 "name": "Nvme$subsystem", 00:15:21.886 "trtype": "$TEST_TRANSPORT", 00:15:21.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:21.886 "adrfam": "ipv4", 00:15:21.886 "trsvcid": "$NVMF_PORT", 00:15:21.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:21.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:21.886 "hdgst": ${hdgst:-false}, 00:15:21.886 "ddgst": ${ddgst:-false} 00:15:21.886 }, 00:15:21.886 "method": "bdev_nvme_attach_controller" 00:15:21.886 } 00:15:21.886 EOF 00:15:21.886 )") 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
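gen_nvmf_target_json assembles that heredoc into the bdev-controller entry printed just below and feeds it to bdevperf over the anonymous descriptor /dev/fd/62. As a hedged standalone sketch (run from an SPDK checkout root; the outer "subsystems" wrapper is the standard SPDK JSON-config shape and is assumed here, since the trace only shows the inner entry), the same run could be reproduced by hand:

# Sketch: hand-written equivalent of gen_nvmf_target_json piped to bdevperf.
# Parameter values mirror this trace; the wrapper object is an assumption.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1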
00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:15:21.886 21:32:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:21.886 "params": {
00:15:21.886 "name": "Nvme0",
00:15:21.886 "trtype": "tcp",
00:15:21.886 "traddr": "10.0.0.2",
00:15:21.886 "adrfam": "ipv4",
00:15:21.886 "trsvcid": "4420",
00:15:21.886 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:15:21.886 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:15:21.886 "hdgst": false,
00:15:21.886 "ddgst": false
00:15:21.886 },
00:15:21.886 "method": "bdev_nvme_attach_controller"
00:15:21.886 }'
00:15:21.886 [2024-06-07 21:32:22.083108] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:15:21.886 [2024-06-07 21:32:22.083172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382770 ]
00:15:22.147 EAL: No free 2048 kB hugepages reported on node 1
00:15:22.147 [2024-06-07 21:32:22.173844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:22.406 [2024-06-07 21:32:22.257685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:22.406 Running I/O for 1 seconds...
00:15:23.347
00:15:23.347 Latency(us)
00:15:23.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:23.347 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:23.347 Verification LBA range: start 0x0 length 0x400
00:15:23.347 Nvme0n1 : 1.00 1213.56 75.85 0.00 0.00 51728.17 8757.99 46947.61
00:15:23.347 ===================================================================================================================
00:15:23.347 Total : 1213.56 75.85 0.00 0.00 51728.17 8757.99 46947.61
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
21:32:23 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 1382054 ']' 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1382054 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 1382054 ']' 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 1382054 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1382054 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1382054' 00:15:23.606 killing process with pid 1382054 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 1382054 00:15:23.606 21:32:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 1382054 00:15:23.865 [2024-06-07 21:32:23.994716] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:23.865 21:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:23.865 21:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:23.865 21:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:23.865 21:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.865 21:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:23.865 21:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.865 21:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.865 21:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.458 21:32:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:26.458 21:32:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:26.458 00:15:26.458 real 0m13.762s 00:15:26.458 user 0m24.493s 00:15:26.458 sys 0m6.019s 00:15:26.459 21:32:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:26.459 21:32:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:26.459 ************************************ 00:15:26.459 END TEST nvmf_host_management 00:15:26.459 ************************************ 00:15:26.459 21:32:26 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:26.459 21:32:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:26.459 21:32:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:26.459 21:32:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.459 ************************************ 00:15:26.459 START TEST nvmf_lvol 00:15:26.459 ************************************ 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:15:26.459 * Looking for test storage...
00:15:26.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@2-@6 -- # PATH=... (xtrace of the PATH exports elided: each line prepends the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin toolchain directories to the standard PATH; @5 exports the result and @6 echoes it)
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs
00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:26.459 21:32:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.031 21:32:32 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:33.031 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:33.031 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:33.031 Found net devices under 0000:af:00.0: cvl_0_0 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:33.031 Found net devices under 0000:af:00.1: cvl_0_1 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.031 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:15:33.031 00:15:33.031 --- 10.0.0.2 ping statistics --- 00:15:33.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.032 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:15:33.032 00:15:33.032 --- 10.0.0.1 ping statistics --- 00:15:33.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.032 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1387161 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1387161 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 1387161 ']' 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:33.032 21:32:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:33.032 [2024-06-07 21:32:32.888948] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:15:33.032 [2024-06-07 21:32:32.889000] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.032 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.032 [2024-06-07 21:32:32.985166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:33.032 [2024-06-07 21:32:33.075807] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.032 [2024-06-07 21:32:33.075852] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
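Those two successful pings validate the namespace wiring that the nvmf_tcp_init trace above performed. Condensed into one script, with the exact device names and addresses from this run, the setup amounts to:

# Target port cvl_0_0 moves into its own network namespace; host port cvl_0_1
# stays in the root namespace, giving a real TCP path between the two ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator

This is why every target-side command that follows is wrapped in ip netns exec cvl_0_0_ns_spdk: the NVMF_TARGET_NS_CMD array captured above is prepended to NVMF_APP when nvmf_tgt is launched.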
00:15:33.032 [2024-06-07 21:32:33.075862] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.032 [2024-06-07 21:32:33.075872] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.032 [2024-06-07 21:32:33.075879] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.032 [2024-06-07 21:32:33.075975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.032 [2024-06-07 21:32:33.076099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.032 [2024-06-07 21:32:33.076104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.599 21:32:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:33.599 21:32:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:15:33.599 21:32:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.599 21:32:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:33.599 21:32:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:33.858 21:32:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.858 21:32:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:33.858 [2024-06-07 21:32:34.103894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.117 21:32:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:34.117 21:32:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:34.117 21:32:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:34.376 21:32:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:34.376 21:32:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:34.635 21:32:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:34.894 21:32:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e5172738-7c00-4084-9356-ccc2316bb01d 00:15:34.894 21:32:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e5172738-7c00-4084-9356-ccc2316bb01d lvol 20 00:15:35.153 21:32:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5ca22dcf-a5c0-4418-b110-085c58fe3b08 00:15:35.153 21:32:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:35.412 21:32:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ca22dcf-a5c0-4418-b110-085c58fe3b08 00:15:35.671 21:32:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
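Collected from the rpc.py calls traced above (and acknowledged by the listen notice just below), the whole lvol-over-raid0 target setup reduces to the following sketch. The two UUIDs are per-run outputs captured by the test, and rpc is shorthand for the full scripts/rpc.py path:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                  # -> Malloc0
$rpc bdev_malloc_create 64 512                                  # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # e.g. e5172738-7c00-4084-9356-ccc2316bb01d
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB lvol, e.g. 5ca22dcf-a5c0-4418-b110-085c58fe3b08
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420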
00:15:35.930 [2024-06-07 21:32:35.942220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.930 21:32:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:36.189 21:32:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1387733 00:15:36.189 21:32:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:36.189 21:32:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:36.189 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.125 21:32:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5ca22dcf-a5c0-4418-b110-085c58fe3b08 MY_SNAPSHOT 00:15:37.385 21:32:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b1cbf005-28b7-451f-8371-d7f5a8b2deae 00:15:37.385 21:32:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5ca22dcf-a5c0-4418-b110-085c58fe3b08 30 00:15:37.645 21:32:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b1cbf005-28b7-451f-8371-d7f5a8b2deae MY_CLONE 00:15:38.214 21:32:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2731bc95-117a-44c6-ac4b-054928eb93b6 00:15:38.214 21:32:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2731bc95-117a-44c6-ac4b-054928eb93b6 00:15:38.783 21:32:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1387733 00:15:46.906 Initializing NVMe Controllers 00:15:46.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:46.906 Controller IO queue size 128, less than required. 00:15:46.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:46.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:46.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:46.906 Initialization complete. Launching workers. 
00:15:46.906 ========================================================
00:15:46.906 Latency(us)
00:15:46.906 Device Information : IOPS MiB/s Average min max
00:15:46.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8848.86 34.57 14478.51 2104.60 120168.70
00:15:46.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8778.57 34.29 14584.88 3640.88 57247.57
00:15:46.906 ========================================================
00:15:46.906 Total : 17627.43 68.86 14531.48 2104.60 120168.70
00:15:46.906
00:15:46.906 21:32:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:15:46.906 21:32:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ca22dcf-a5c0-4418-b110-085c58fe3b08
00:15:47.165 21:32:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e5172738-7c00-4084-9356-ccc2316bb01d
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:47.424 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1387161 ']'
21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1387161
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 1387161 ']'
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 1387161
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1387161
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1387161'
killing process with pid 1387161
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 1387161
21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 1387161
00:15:47.684 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
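For reference, the data-path phase that ran between the target setup and this teardown condenses to the sketch below: spdk_nvme_perf drives random writes against the exported namespace while the snapshot/clone operations execute on the live lvol. UUIDs are again per-run values from this trace, rpc reuses the shorthand from the setup sketch, and paths are abbreviated to the SPDK repo root:

./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &                    # background I/O load
perf_pid=$!
sleep 1
$rpc bdev_lvol_snapshot 5ca22dcf-a5c0-4418-b110-085c58fe3b08 MY_SNAPSHOT  # -> b1cbf005-28b7-451f-8371-d7f5a8b2deae
$rpc bdev_lvol_resize 5ca22dcf-a5c0-4418-b110-085c58fe3b08 30             # grow the lvol 20 -> 30 MiB under I/O
$rpc bdev_lvol_clone b1cbf005-28b7-451f-8371-d7f5a8b2deae MY_CLONE        # -> 2731bc95-117a-44c6-ac4b-054928eb93b6
$rpc bdev_lvol_inflate 2731bc95-117a-44c6-ac4b-054928eb93b6               # decouple the clone from its snapshot
wait $perf_pid                                                            # its results are the table above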
21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:47.684 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:47.684 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.684 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.684 21:32:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.684 21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.684 21:32:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.218 21:32:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.218 00:15:50.218 real 0m23.744s 00:15:50.218 user 1m8.077s 00:15:50.218 sys 0m7.831s 00:15:50.218 21:32:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:50.218 21:32:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:50.218 ************************************ 00:15:50.218 END TEST nvmf_lvol 00:15:50.218 ************************************ 00:15:50.218 21:32:49 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:50.218 21:32:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:50.218 21:32:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:50.218 21:32:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.218 ************************************ 00:15:50.218 START TEST nvmf_lvs_grow 00:15:50.218 ************************************ 00:15:50.218 21:32:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:50.218 * Looking for test storage... 
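
Teardown at the end of nvmf_lvol follows the usual order — unexport before delete, delete before unload — which in sketch form (PIDs and UUIDs as printed above) is:

    rpc=scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0          # stop exposing the lvol first
    $rpc bdev_lvol_delete 5ca22dcf-a5c0-4418-b110-085c58fe3b08     # then the lvol itself
    $rpc bdev_lvol_delete_lvstore -u e5172738-7c00-4084-9356-ccc2316bb01d
    modprobe -v -r nvme-tcp                                        # host-side modules; nvmftestfini retries up to 20x
    modprobe -v -r nvme-fabrics
    kill 1387161 && wait 1387161                                   # nvmf_tgt pid from this run

With that done, the harness moves straight into nvmf_lvs_grow.sh, whose trace continues below.
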
00:15:50.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.218 21:32:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.219 21:32:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:56.788 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:56.789 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:56.789 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:56.789 Found net devices under 0000:af:00.0: cvl_0_0 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:56.789 Found net devices under 0000:af:00.1: cvl_0_1 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:15:56.789 00:15:56.789 --- 10.0.0.2 ping statistics --- 00:15:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.789 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
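
Everything in nvmf_tcp_init above exists to make one physical E810 card act as both ends of the fabric: port 0 (cvl_0_0) is moved into its own network namespace and becomes the target at 10.0.0.2, while port 1 (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic crosses the real link rather than being short-circuited inside one kernel stack. Stripped of the tracing, the setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # don't let the host firewall eat NVMe/TCP
    ping -c 1 10.0.0.2                                             # sanity-check both directions before any RPC
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
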
00:15:56.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:15:56.789 00:15:56.789 --- 10.0.0.1 ping statistics --- 00:15:56.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.789 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1393836 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1393836 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 1393836 ']' 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.789 21:32:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:56.790 21:32:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.790 21:32:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:56.790 21:32:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:56.790 [2024-06-07 21:32:56.676497] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:15:56.790 [2024-06-07 21:32:56.676552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.790 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.790 [2024-06-07 21:32:56.771724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.790 [2024-06-07 21:32:56.861275] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.790 [2024-06-07 21:32:56.861317] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
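
With the namespaces verified, the target itself is launched inside cvl_0_0_ns_spdk, and everything after this point is JSON-RPC against its default socket. A minimal sketch of the startup, matching the flags in the trace:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # one reactor on core 0, all tracepoint groups enabled
    nvmfpid=$!                       # 1393836 in this run
    waitforlisten $nvmfpid           # harness helper: blocks until /var/tmp/spdk.sock accepts RPCs
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # first RPC of the test proper

The "grow" this test is named for is then just three steps on the backing file, visible further down in the trace: truncate -s 400M on the 200M AIO file, bdev_aio_rescan aio_bdev so the bdev picks up the new block count, and bdev_lvol_grow_lvstore -u <lvs-uuid>, after which total_data_clusters reported by bdev_lvol_get_lvstores doubles from 49 to 99.
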
00:15:56.790 [2024-06-07 21:32:56.861327] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.790 [2024-06-07 21:32:56.861336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.790 [2024-06-07 21:32:56.861343] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.790 [2024-06-07 21:32:56.861365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.358 21:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:57.358 21:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:15:57.358 21:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.358 21:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:57.358 21:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:57.617 21:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.617 21:32:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:57.617 [2024-06-07 21:32:57.872261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:57.876 ************************************ 00:15:57.876 START TEST lvs_grow_clean 00:15:57.876 ************************************ 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:57.876 21:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:58.136 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:58.136 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:58.395 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f589902-d650-47b6-876a-45cb1b51d1e9 00:15:58.395 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:15:58.395 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:58.654 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:58.654 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:58.654 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f589902-d650-47b6-876a-45cb1b51d1e9 lvol 150 00:15:58.654 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b933aa3e-f849-44dd-813f-174f79316302 00:15:58.654 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:58.654 21:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:58.913 [2024-06-07 21:32:59.143616] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:58.913 [2024-06-07 21:32:59.143683] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:58.913 true 00:15:58.913 21:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:15:58.913 21:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:59.172 21:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:59.172 21:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:59.431 21:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b933aa3e-f849-44dd-813f-174f79316302 00:15:59.691 21:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:59.950 [2024-06-07 21:33:00.134657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.950 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1394583 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1394583 /var/tmp/bdevperf.sock 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 1394583 ']' 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:00.209 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:00.209 [2024-06-07 21:33:00.432640] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:16:00.209 [2024-06-07 21:33:00.432699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394583 ] 00:16:00.209 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.468 [2024-06-07 21:33:00.513341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.468 [2024-06-07 21:33:00.603950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.468 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:00.468 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:16:00.468 21:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:01.036 Nvme0n1 00:16:01.036 21:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:01.295 [ 00:16:01.295 { 00:16:01.295 "name": "Nvme0n1", 00:16:01.295 "aliases": [ 00:16:01.295 "b933aa3e-f849-44dd-813f-174f79316302" 00:16:01.295 ], 00:16:01.295 "product_name": "NVMe disk", 00:16:01.295 "block_size": 4096, 00:16:01.295 "num_blocks": 38912, 00:16:01.295 "uuid": "b933aa3e-f849-44dd-813f-174f79316302", 00:16:01.295 "assigned_rate_limits": { 00:16:01.295 "rw_ios_per_sec": 0, 00:16:01.295 "rw_mbytes_per_sec": 0, 00:16:01.295 "r_mbytes_per_sec": 0, 00:16:01.295 "w_mbytes_per_sec": 0 00:16:01.295 }, 00:16:01.295 "claimed": false, 00:16:01.295 "zoned": false, 00:16:01.295 "supported_io_types": { 00:16:01.295 "read": true, 00:16:01.295 "write": true, 00:16:01.295 "unmap": true, 00:16:01.295 "write_zeroes": true, 00:16:01.295 "flush": true, 00:16:01.295 "reset": true, 00:16:01.295 "compare": true, 00:16:01.295 "compare_and_write": true, 00:16:01.295 "abort": true, 00:16:01.295 "nvme_admin": true, 00:16:01.295 "nvme_io": true 00:16:01.295 }, 00:16:01.295 "memory_domains": [ 00:16:01.295 { 00:16:01.295 "dma_device_id": "system", 00:16:01.295 "dma_device_type": 1 00:16:01.295 } 00:16:01.295 ], 00:16:01.295 "driver_specific": { 00:16:01.295 "nvme": [ 00:16:01.295 { 00:16:01.295 "trid": { 00:16:01.295 "trtype": "TCP", 00:16:01.295 "adrfam": "IPv4", 00:16:01.295 "traddr": "10.0.0.2", 00:16:01.295 "trsvcid": "4420", 00:16:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:01.295 }, 00:16:01.295 "ctrlr_data": { 00:16:01.295 "cntlid": 1, 00:16:01.295 "vendor_id": "0x8086", 00:16:01.295 "model_number": "SPDK bdev Controller", 00:16:01.295 "serial_number": "SPDK0", 00:16:01.295 "firmware_revision": "24.09", 00:16:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:01.295 "oacs": { 00:16:01.295 "security": 0, 00:16:01.295 "format": 0, 00:16:01.295 "firmware": 0, 00:16:01.295 "ns_manage": 0 00:16:01.295 }, 00:16:01.295 "multi_ctrlr": true, 00:16:01.295 "ana_reporting": false 00:16:01.295 }, 00:16:01.295 "vs": { 00:16:01.295 "nvme_version": "1.3" 00:16:01.295 }, 00:16:01.295 "ns_data": { 00:16:01.295 "id": 1, 00:16:01.295 "can_share": true 00:16:01.295 } 00:16:01.295 } 00:16:01.295 ], 00:16:01.295 "mp_policy": "active_passive" 00:16:01.295 } 00:16:01.295 } 00:16:01.295 ] 00:16:01.295 21:33:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1394750 00:16:01.295 21:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:01.295 21:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:01.295 Running I/O for 10 seconds... 00:16:02.673 Latency(us) 00:16:02.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:02.673 Nvme0n1 : 1.00 14550.00 56.84 0.00 0.00 0.00 0.00 0.00 00:16:02.673 =================================================================================================================== 00:16:02.673 Total : 14550.00 56.84 0.00 0.00 0.00 0.00 0.00 00:16:02.673 00:16:03.245 21:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:03.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.504 Nvme0n1 : 2.00 14655.00 57.25 0.00 0.00 0.00 0.00 0.00 00:16:03.504 =================================================================================================================== 00:16:03.504 Total : 14655.00 57.25 0.00 0.00 0.00 0.00 0.00 00:16:03.504 00:16:03.504 true 00:16:03.504 21:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:03.504 21:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:03.763 21:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:03.763 21:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:03.763 21:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1394750 00:16:04.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.332 Nvme0n1 : 3.00 14698.00 57.41 0.00 0.00 0.00 0.00 0.00 00:16:04.332 =================================================================================================================== 00:16:04.332 Total : 14698.00 57.41 0.00 0.00 0.00 0.00 0.00 00:16:04.332 00:16:05.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.350 Nvme0n1 : 4.00 14735.50 57.56 0.00 0.00 0.00 0.00 0.00 00:16:05.350 =================================================================================================================== 00:16:05.350 Total : 14735.50 57.56 0.00 0.00 0.00 0.00 0.00 00:16:05.350 00:16:06.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.298 Nvme0n1 : 5.00 14758.00 57.65 0.00 0.00 0.00 0.00 0.00 00:16:06.298 =================================================================================================================== 00:16:06.298 Total : 14758.00 57.65 0.00 0.00 0.00 0.00 0.00 00:16:06.298 00:16:07.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.676 Nvme0n1 : 6.00 14782.33 57.74 0.00 0.00 0.00 0.00 0.00 00:16:07.676 
=================================================================================================================== 00:16:07.676 Total : 14782.33 57.74 0.00 0.00 0.00 0.00 0.00 00:16:07.676 00:16:08.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.613 Nvme0n1 : 7.00 14796.29 57.80 0.00 0.00 0.00 0.00 0.00 00:16:08.613 =================================================================================================================== 00:16:08.613 Total : 14796.29 57.80 0.00 0.00 0.00 0.00 0.00 00:16:08.613 00:16:09.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:09.550 Nvme0n1 : 8.00 14810.75 57.85 0.00 0.00 0.00 0.00 0.00 00:16:09.550 =================================================================================================================== 00:16:09.550 Total : 14810.75 57.85 0.00 0.00 0.00 0.00 0.00 00:16:09.550 00:16:10.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.486 Nvme0n1 : 9.00 14821.11 57.89 0.00 0.00 0.00 0.00 0.00 00:16:10.486 =================================================================================================================== 00:16:10.486 Total : 14821.11 57.89 0.00 0.00 0.00 0.00 0.00 00:16:10.486 00:16:11.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.424 Nvme0n1 : 10.00 14833.40 57.94 0.00 0.00 0.00 0.00 0.00 00:16:11.424 =================================================================================================================== 00:16:11.424 Total : 14833.40 57.94 0.00 0.00 0.00 0.00 0.00 00:16:11.424 00:16:11.424 00:16:11.424 Latency(us) 00:16:11.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.424 Nvme0n1 : 10.01 14833.66 57.94 0.00 0.00 8621.95 6523.81 15371.17 00:16:11.424 =================================================================================================================== 00:16:11.424 Total : 14833.66 57.94 0.00 0.00 8621.95 6523.81 15371.17 00:16:11.424 0 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1394583 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 1394583 ']' 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 1394583 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1394583 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1394583' 00:16:11.424 killing process with pid 1394583 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 1394583 00:16:11.424 Received shutdown signal, test time was about 10.000000 seconds 00:16:11.424 00:16:11.424 Latency(us) 00:16:11.424 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:16:11.424 =================================================================================================================== 00:16:11.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:11.424 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 1394583 00:16:11.684 21:33:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:11.943 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:12.200 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:12.200 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:12.458 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:12.458 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:12.458 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:12.716 [2024-06-07 21:33:12.758913] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:12.716 21:33:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:12.975 request: 00:16:12.975 { 00:16:12.975 "uuid": "8f589902-d650-47b6-876a-45cb1b51d1e9", 00:16:12.975 "method": "bdev_lvol_get_lvstores", 00:16:12.975 "req_id": 1 00:16:12.975 } 00:16:12.975 Got JSON-RPC error response 00:16:12.975 response: 00:16:12.975 { 00:16:12.975 "code": -19, 00:16:12.975 "message": "No such device" 00:16:12.975 } 00:16:12.975 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:16:12.975 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:12.975 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:12.975 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:12.975 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:13.233 aio_bdev 00:16:13.233 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b933aa3e-f849-44dd-813f-174f79316302 00:16:13.233 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=b933aa3e-f849-44dd-813f-174f79316302 00:16:13.233 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:13.234 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:16:13.234 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:13.234 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:13.234 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:13.492 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b933aa3e-f849-44dd-813f-174f79316302 -t 2000 00:16:13.751 [ 00:16:13.751 { 00:16:13.751 "name": "b933aa3e-f849-44dd-813f-174f79316302", 00:16:13.751 "aliases": [ 00:16:13.751 "lvs/lvol" 00:16:13.751 ], 00:16:13.751 "product_name": "Logical Volume", 00:16:13.751 "block_size": 4096, 00:16:13.751 "num_blocks": 38912, 00:16:13.751 "uuid": "b933aa3e-f849-44dd-813f-174f79316302", 00:16:13.751 "assigned_rate_limits": { 00:16:13.751 "rw_ios_per_sec": 0, 00:16:13.751 "rw_mbytes_per_sec": 0, 00:16:13.751 "r_mbytes_per_sec": 0, 00:16:13.751 "w_mbytes_per_sec": 0 00:16:13.751 }, 00:16:13.751 "claimed": false, 00:16:13.751 "zoned": false, 00:16:13.751 "supported_io_types": { 00:16:13.751 "read": true, 00:16:13.751 "write": true, 00:16:13.751 "unmap": true, 00:16:13.751 "write_zeroes": true, 00:16:13.751 "flush": false, 00:16:13.751 "reset": true, 00:16:13.751 "compare": false, 00:16:13.751 "compare_and_write": false, 00:16:13.751 "abort": false, 00:16:13.751 "nvme_admin": false, 00:16:13.751 "nvme_io": false 00:16:13.751 }, 00:16:13.751 "driver_specific": { 00:16:13.751 "lvol": { 00:16:13.751 "lvol_store_uuid": "8f589902-d650-47b6-876a-45cb1b51d1e9", 00:16:13.751 "base_bdev": "aio_bdev", 
00:16:13.751 "thin_provision": false, 00:16:13.751 "num_allocated_clusters": 38, 00:16:13.751 "snapshot": false, 00:16:13.751 "clone": false, 00:16:13.751 "esnap_clone": false 00:16:13.751 } 00:16:13.751 } 00:16:13.751 } 00:16:13.751 ] 00:16:13.751 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:16:13.751 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:13.751 21:33:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:14.009 21:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:14.009 21:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:14.009 21:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:14.009 21:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:14.009 21:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b933aa3e-f849-44dd-813f-174f79316302 00:16:14.267 21:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f589902-d650-47b6-876a-45cb1b51d1e9 00:16:14.525 21:33:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:14.784 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:14.784 00:16:14.784 real 0m17.122s 00:16:14.784 user 0m16.833s 00:16:14.784 sys 0m1.621s 00:16:14.784 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:14.784 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:14.784 ************************************ 00:16:14.784 END TEST lvs_grow_clean 00:16:14.784 ************************************ 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.042 ************************************ 00:16:15.042 START TEST lvs_grow_dirty 00:16:15.042 ************************************ 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:15.042 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:15.301 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:15.301 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:15.559 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:15.559 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:15.559 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:15.818 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:15.818 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:15.818 21:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 lvol 150 00:16:16.077 21:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c14439c1-3821-4888-9ce2-89bbabdf8b33 00:16:16.077 21:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.077 21:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:16.077 [2024-06-07 21:33:16.345617] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:16.077 [2024-06-07 21:33:16.345677] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:16.335 true 00:16:16.335 21:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:16.335 21:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:16:16.335 21:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:16.335 21:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:16.594 21:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c14439c1-3821-4888-9ce2-89bbabdf8b33 00:16:16.853 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:17.112 [2024-06-07 21:33:17.284596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.112 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1398119 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1398119 /var/tmp/bdevperf.sock 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1398119 ']' 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:17.371 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:17.371 [2024-06-07 21:33:17.595094] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:16:17.371 [2024-06-07 21:33:17.595152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398119 ] 00:16:17.371 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.640 [2024-06-07 21:33:17.676056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.640 [2024-06-07 21:33:17.765071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.640 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:17.640 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:16:17.640 21:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:17.900 Nvme0n1 00:16:18.158 21:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:18.158 [ 00:16:18.158 { 00:16:18.158 "name": "Nvme0n1", 00:16:18.158 "aliases": [ 00:16:18.158 "c14439c1-3821-4888-9ce2-89bbabdf8b33" 00:16:18.158 ], 00:16:18.158 "product_name": "NVMe disk", 00:16:18.158 "block_size": 4096, 00:16:18.158 "num_blocks": 38912, 00:16:18.158 "uuid": "c14439c1-3821-4888-9ce2-89bbabdf8b33", 00:16:18.158 "assigned_rate_limits": { 00:16:18.158 "rw_ios_per_sec": 0, 00:16:18.158 "rw_mbytes_per_sec": 0, 00:16:18.158 "r_mbytes_per_sec": 0, 00:16:18.158 "w_mbytes_per_sec": 0 00:16:18.158 }, 00:16:18.158 "claimed": false, 00:16:18.158 "zoned": false, 00:16:18.158 "supported_io_types": { 00:16:18.158 "read": true, 00:16:18.158 "write": true, 00:16:18.158 "unmap": true, 00:16:18.158 "write_zeroes": true, 00:16:18.158 "flush": true, 00:16:18.158 "reset": true, 00:16:18.158 "compare": true, 00:16:18.158 "compare_and_write": true, 00:16:18.158 "abort": true, 00:16:18.158 "nvme_admin": true, 00:16:18.158 "nvme_io": true 00:16:18.159 }, 00:16:18.159 "memory_domains": [ 00:16:18.159 { 00:16:18.159 "dma_device_id": "system", 00:16:18.159 "dma_device_type": 1 00:16:18.159 } 00:16:18.159 ], 00:16:18.159 "driver_specific": { 00:16:18.159 "nvme": [ 00:16:18.159 { 00:16:18.159 "trid": { 00:16:18.159 "trtype": "TCP", 00:16:18.159 "adrfam": "IPv4", 00:16:18.159 "traddr": "10.0.0.2", 00:16:18.159 "trsvcid": "4420", 00:16:18.159 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:18.159 }, 00:16:18.159 "ctrlr_data": { 00:16:18.159 "cntlid": 1, 00:16:18.159 "vendor_id": "0x8086", 00:16:18.159 "model_number": "SPDK bdev Controller", 00:16:18.159 "serial_number": "SPDK0", 00:16:18.159 "firmware_revision": "24.09", 00:16:18.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:18.159 "oacs": { 00:16:18.159 "security": 0, 00:16:18.159 "format": 0, 00:16:18.159 "firmware": 0, 00:16:18.159 "ns_manage": 0 00:16:18.159 }, 00:16:18.159 "multi_ctrlr": true, 00:16:18.159 "ana_reporting": false 00:16:18.159 }, 00:16:18.159 "vs": { 00:16:18.159 "nvme_version": "1.3" 00:16:18.159 }, 00:16:18.159 "ns_data": { 00:16:18.159 "id": 1, 00:16:18.159 "can_share": true 00:16:18.159 } 00:16:18.159 } 00:16:18.159 ], 00:16:18.159 "mp_policy": "active_passive" 00:16:18.159 } 00:16:18.159 } 00:16:18.159 ] 00:16:18.159 21:33:18 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1398377 00:16:18.159 21:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:18.159 21:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:18.417 Running I/O for 10 seconds... 00:16:19.355 Latency(us) 00:16:19.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:19.355 Nvme0n1 : 1.00 14662.00 57.27 0.00 0.00 0.00 0.00 0.00 00:16:19.355 =================================================================================================================== 00:16:19.355 Total : 14662.00 57.27 0.00 0.00 0.00 0.00 0.00 00:16:19.355 00:16:20.291 21:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:20.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.291 Nvme0n1 : 2.00 14707.00 57.45 0.00 0.00 0.00 0.00 0.00 00:16:20.291 =================================================================================================================== 00:16:20.291 Total : 14707.00 57.45 0.00 0.00 0.00 0.00 0.00 00:16:20.291 00:16:20.549 true 00:16:20.549 21:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:20.549 21:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:20.807 21:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:20.807 21:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:20.807 21:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1398377 00:16:21.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.374 Nvme0n1 : 3.00 14732.67 57.55 0.00 0.00 0.00 0.00 0.00 00:16:21.374 =================================================================================================================== 00:16:21.374 Total : 14732.67 57.55 0.00 0.00 0.00 0.00 0.00 00:16:21.374 00:16:22.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.311 Nvme0n1 : 4.00 14755.50 57.64 0.00 0.00 0.00 0.00 0.00 00:16:22.311 =================================================================================================================== 00:16:22.311 Total : 14755.50 57.64 0.00 0.00 0.00 0.00 0.00 00:16:22.311 00:16:23.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.689 Nvme0n1 : 5.00 14778.80 57.73 0.00 0.00 0.00 0.00 0.00 00:16:23.689 =================================================================================================================== 00:16:23.689 Total : 14778.80 57.73 0.00 0.00 0.00 0.00 0.00 00:16:23.689 00:16:24.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.627 Nvme0n1 : 6.00 14794.33 57.79 0.00 0.00 0.00 0.00 0.00 00:16:24.627 
=================================================================================================================== 00:16:24.627 Total : 14794.33 57.79 0.00 0.00 0.00 0.00 0.00 00:16:24.627 00:16:25.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.565 Nvme0n1 : 7.00 14810.00 57.85 0.00 0.00 0.00 0.00 0.00 00:16:25.565 =================================================================================================================== 00:16:25.565 Total : 14810.00 57.85 0.00 0.00 0.00 0.00 0.00 00:16:25.565 00:16:26.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:26.503 Nvme0n1 : 8.00 14823.75 57.91 0.00 0.00 0.00 0.00 0.00 00:16:26.503 =================================================================================================================== 00:16:26.503 Total : 14823.75 57.91 0.00 0.00 0.00 0.00 0.00 00:16:26.503 00:16:27.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.441 Nvme0n1 : 9.00 14835.33 57.95 0.00 0.00 0.00 0.00 0.00 00:16:27.441 =================================================================================================================== 00:16:27.441 Total : 14835.33 57.95 0.00 0.00 0.00 0.00 0.00 00:16:27.441 00:16:28.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.378 Nvme0n1 : 10.00 14844.60 57.99 0.00 0.00 0.00 0.00 0.00 00:16:28.378 =================================================================================================================== 00:16:28.378 Total : 14844.60 57.99 0.00 0.00 0.00 0.00 0.00 00:16:28.378 00:16:28.378 00:16:28.378 Latency(us) 00:16:28.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.378 Nvme0n1 : 10.01 14844.47 57.99 0.00 0.00 8614.51 3187.43 11498.59 00:16:28.378 =================================================================================================================== 00:16:28.378 Total : 14844.47 57.99 0.00 0.00 8614.51 3187.43 11498.59 00:16:28.378 0 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1398119 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 1398119 ']' 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 1398119 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1398119 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1398119' 00:16:28.378 killing process with pid 1398119 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 1398119 00:16:28.378 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.378 00:16:28.378 Latency(us) 00:16:28.378 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:16:28.378 =================================================================================================================== 00:16:28.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.378 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 1398119 00:16:28.637 21:33:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:28.896 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:29.154 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:29.154 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1393836 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1393836 00:16:29.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1393836 Killed "${NVMF_APP[@]}" "$@" 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1400248 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1400248 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1400248 ']' 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
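What the restart above is verifying: the run grew the lvstore from 49 to 99 data clusters and then killed the nvmf target with kill -9 while the store was still loaded, so the target starting here has to recover it dirty. The numbers being checked are plain cluster arithmetic: the backing AIO file went from 200 MiB to 400 MiB, which at the 4 MiB cluster size is 100 clusters less one reserved for blobstore metadata, i.e. 99 data clusters; the 150 MiB lvol rounds up to 38 clusters (38 x 4 MiB = 152 MiB = 38912 blocks of 4096 bytes, matching the bdev dump above), leaving 99 - 38 = 61 free. A minimal sketch of the same post-restart check, assuming a freshly restarted target, the rpc.py path used throughout this log, and the lvstore UUID from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    LVS=d5241b85-b210-4286-85cc-e8e8ba4ce7d0
    # Re-attaching the AIO file makes the blobstore load and run recovery
    # (the "Performing recovery on blobstore" notice further down).
    $RPC bdev_aio_create "$AIO" aio_bdev 4096
    total=$($RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')
    free=$($RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters')
    (( total == 99 && free == 61 )) && echo 'dirty grow recovered OK'

The --md-pages-per-cluster-ratio 300 passed when the lvstore was created is what makes the grow safe: it reserves extra metadata pages up front so the enlarged cluster count still fits in the metadata region.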
00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:29.414 21:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:29.673 [2024-06-07 21:33:29.696131] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:16:29.673 [2024-06-07 21:33:29.696190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.673 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.673 [2024-06-07 21:33:29.793256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.673 [2024-06-07 21:33:29.882933] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.673 [2024-06-07 21:33:29.882976] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.673 [2024-06-07 21:33:29.882986] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.673 [2024-06-07 21:33:29.882996] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.673 [2024-06-07 21:33:29.883003] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.673 [2024-06-07 21:33:29.883032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.610 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:30.610 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:16:30.610 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.610 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:30.610 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:30.610 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.610 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:30.870 [2024-06-07 21:33:30.907407] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:30.870 [2024-06-07 21:33:30.907510] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:30.870 [2024-06-07 21:33:30.907547] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:30.870 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:30.870 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c14439c1-3821-4888-9ce2-89bbabdf8b33 00:16:30.870 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=c14439c1-3821-4888-9ce2-89bbabdf8b33 00:16:30.870 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:30.870 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:16:30.870 21:33:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:30.870 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:30.870 21:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:31.171 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c14439c1-3821-4888-9ce2-89bbabdf8b33 -t 2000 00:16:31.171 [ 00:16:31.171 { 00:16:31.171 "name": "c14439c1-3821-4888-9ce2-89bbabdf8b33", 00:16:31.171 "aliases": [ 00:16:31.171 "lvs/lvol" 00:16:31.171 ], 00:16:31.171 "product_name": "Logical Volume", 00:16:31.171 "block_size": 4096, 00:16:31.171 "num_blocks": 38912, 00:16:31.171 "uuid": "c14439c1-3821-4888-9ce2-89bbabdf8b33", 00:16:31.171 "assigned_rate_limits": { 00:16:31.171 "rw_ios_per_sec": 0, 00:16:31.171 "rw_mbytes_per_sec": 0, 00:16:31.171 "r_mbytes_per_sec": 0, 00:16:31.171 "w_mbytes_per_sec": 0 00:16:31.171 }, 00:16:31.171 "claimed": false, 00:16:31.171 "zoned": false, 00:16:31.171 "supported_io_types": { 00:16:31.171 "read": true, 00:16:31.171 "write": true, 00:16:31.171 "unmap": true, 00:16:31.171 "write_zeroes": true, 00:16:31.171 "flush": false, 00:16:31.171 "reset": true, 00:16:31.171 "compare": false, 00:16:31.171 "compare_and_write": false, 00:16:31.171 "abort": false, 00:16:31.171 "nvme_admin": false, 00:16:31.171 "nvme_io": false 00:16:31.171 }, 00:16:31.171 "driver_specific": { 00:16:31.171 "lvol": { 00:16:31.171 "lvol_store_uuid": "d5241b85-b210-4286-85cc-e8e8ba4ce7d0", 00:16:31.171 "base_bdev": "aio_bdev", 00:16:31.171 "thin_provision": false, 00:16:31.171 "num_allocated_clusters": 38, 00:16:31.171 "snapshot": false, 00:16:31.171 "clone": false, 00:16:31.171 "esnap_clone": false 00:16:31.171 } 00:16:31.171 } 00:16:31.171 } 00:16:31.171 ] 00:16:31.458 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:16:31.458 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:31.458 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:31.458 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:31.458 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:31.458 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:31.717 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:31.717 21:33:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:31.976 [2024-06-07 21:33:32.043973] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:31.976 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:32.235 request: 00:16:32.235 { 00:16:32.235 "uuid": "d5241b85-b210-4286-85cc-e8e8ba4ce7d0", 00:16:32.235 "method": "bdev_lvol_get_lvstores", 00:16:32.235 "req_id": 1 00:16:32.235 } 00:16:32.235 Got JSON-RPC error response 00:16:32.235 response: 00:16:32.235 { 00:16:32.235 "code": -19, 00:16:32.235 "message": "No such device" 00:16:32.235 } 00:16:32.235 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:16:32.235 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:32.235 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:32.235 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:32.235 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:32.494 aio_bdev 00:16:32.494 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c14439c1-3821-4888-9ce2-89bbabdf8b33 00:16:32.494 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=c14439c1-3821-4888-9ce2-89bbabdf8b33 00:16:32.494 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:32.494 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:16:32.494 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:32.494 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:32.494 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:32.753 21:33:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c14439c1-3821-4888-9ce2-89bbabdf8b33 -t 2000 00:16:33.011 [ 00:16:33.011 { 00:16:33.011 "name": "c14439c1-3821-4888-9ce2-89bbabdf8b33", 00:16:33.011 "aliases": [ 00:16:33.011 "lvs/lvol" 00:16:33.011 ], 00:16:33.011 "product_name": "Logical Volume", 00:16:33.011 "block_size": 4096, 00:16:33.011 "num_blocks": 38912, 00:16:33.011 "uuid": "c14439c1-3821-4888-9ce2-89bbabdf8b33", 00:16:33.011 "assigned_rate_limits": { 00:16:33.011 "rw_ios_per_sec": 0, 00:16:33.011 "rw_mbytes_per_sec": 0, 00:16:33.011 "r_mbytes_per_sec": 0, 00:16:33.011 "w_mbytes_per_sec": 0 00:16:33.011 }, 00:16:33.011 "claimed": false, 00:16:33.011 "zoned": false, 00:16:33.011 "supported_io_types": { 00:16:33.011 "read": true, 00:16:33.011 "write": true, 00:16:33.011 "unmap": true, 00:16:33.011 "write_zeroes": true, 00:16:33.011 "flush": false, 00:16:33.011 "reset": true, 00:16:33.011 "compare": false, 00:16:33.011 "compare_and_write": false, 00:16:33.011 "abort": false, 00:16:33.011 "nvme_admin": false, 00:16:33.011 "nvme_io": false 00:16:33.011 }, 00:16:33.011 "driver_specific": { 00:16:33.011 "lvol": { 00:16:33.011 "lvol_store_uuid": "d5241b85-b210-4286-85cc-e8e8ba4ce7d0", 00:16:33.011 "base_bdev": "aio_bdev", 00:16:33.011 "thin_provision": false, 00:16:33.011 "num_allocated_clusters": 38, 00:16:33.011 "snapshot": false, 00:16:33.011 "clone": false, 00:16:33.011 "esnap_clone": false 00:16:33.011 } 00:16:33.011 } 00:16:33.011 } 00:16:33.011 ] 00:16:33.011 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:16:33.011 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:33.011 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:33.268 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:33.268 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:33.268 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:33.268 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:33.268 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c14439c1-3821-4888-9ce2-89bbabdf8b33 00:16:33.527 21:33:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5241b85-b210-4286-85cc-e8e8ba4ce7d0 00:16:33.785 21:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:34.043 21:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:34.043 00:16:34.043 real 0m19.188s 00:16:34.043 user 0m48.624s 00:16:34.043 sys 0m4.017s 00:16:34.043 21:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:34.043 21:33:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:34.043 ************************************ 00:16:34.043 END TEST lvs_grow_dirty 00:16:34.043 ************************************ 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:34.302 nvmf_trace.0 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.302 rmmod nvme_tcp 00:16:34.302 rmmod nvme_fabrics 00:16:34.302 rmmod nvme_keyring 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1400248 ']' 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1400248 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 1400248 ']' 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 1400248 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:34.302 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1400248 00:16:34.302 21:33:34 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:34.303 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:34.303 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1400248' 00:16:34.303 killing process with pid 1400248 00:16:34.303 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 1400248 00:16:34.303 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 1400248 00:16:34.562 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.562 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.562 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.562 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.562 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.562 21:33:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.562 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.562 21:33:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.098 21:33:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:37.098 00:16:37.098 real 0m46.788s 00:16:37.098 user 1m12.810s 00:16:37.098 sys 0m11.037s 00:16:37.098 21:33:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:37.098 21:33:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:37.098 ************************************ 00:16:37.098 END TEST nvmf_lvs_grow 00:16:37.098 ************************************ 00:16:37.098 21:33:36 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:37.098 21:33:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:37.098 21:33:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:37.098 21:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:37.098 ************************************ 00:16:37.098 START TEST nvmf_bdev_io_wait 00:16:37.098 ************************************ 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:37.098 * Looking for test storage... 
00:16:37.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:37.098 21:33:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.668 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:43.669 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:43.669 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:43.669 Found net devices under 0000:af:00.0: cvl_0_0 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:43.669 Found net devices under 0000:af:00.1: cvl_0_1 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:43.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:16:43.669 00:16:43.669 --- 10.0.0.2 ping statistics --- 00:16:43.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.669 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:16:43.669 00:16:43.669 --- 10.0.0.1 ping statistics --- 00:16:43.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.669 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1405342 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1405342 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 1405342 ']' 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:43.669 21:33:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:43.669 [2024-06-07 21:33:43.670263] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:16:43.670 [2024-06-07 21:33:43.670320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.670 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.670 [2024-06-07 21:33:43.758538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.670 [2024-06-07 21:33:43.850881] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.670 [2024-06-07 21:33:43.850926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.670 [2024-06-07 21:33:43.850936] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.670 [2024-06-07 21:33:43.850945] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.670 [2024-06-07 21:33:43.850952] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.670 [2024-06-07 21:33:43.854048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.670 [2024-06-07 21:33:43.854069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.670 [2024-06-07 21:33:43.854187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.670 [2024-06-07 21:33:43.854189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 [2024-06-07 21:33:44.744693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
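The target side for the bdev_io_wait test is now up. Note the bdev_set_options -p 5 -c 1 issued before framework_start_init (hence the --wait-for-rpc start): it shrinks the global bdev_io pool to 5 entries with a per-thread cache of 1, so four bdevperf clients at queue depth 128 will quickly exhaust it and exercise the spdk_bdev_queue_io_wait retry path instead of plain submission. Each client below receives its config over /dev/fd/63 from gen_nvmf_target_json; reconstructed from the heredoc template visible in the trace, the generated JSON comes out along these lines (the wrapper and the file name are illustrative, the values are from this run):

    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # One of the four clients started below; the others differ only in
    # core mask (-m), shm id (-i) and workload (-w read|flush|unmap).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256

A Malloc bdev backs the namespace because it supports all four IO types, so every submission path can hit the starved pool concurrently.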
00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 Malloc0 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:44.608 [2024-06-07 21:33:44.813045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1405622 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1405624 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.608 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.608 { 00:16:44.608 "params": { 00:16:44.608 "name": "Nvme$subsystem", 00:16:44.608 "trtype": "$TEST_TRANSPORT", 00:16:44.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.608 "adrfam": "ipv4", 00:16:44.608 "trsvcid": "$NVMF_PORT", 00:16:44.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.608 "hdgst": ${hdgst:-false}, 00:16:44.608 "ddgst": ${ddgst:-false} 00:16:44.608 }, 00:16:44.608 "method": "bdev_nvme_attach_controller" 00:16:44.608 } 00:16:44.608 EOF 00:16:44.608 )") 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1405627 00:16:44.609 21:33:44 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.609 { 00:16:44.609 "params": { 00:16:44.609 "name": "Nvme$subsystem", 00:16:44.609 "trtype": "$TEST_TRANSPORT", 00:16:44.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.609 "adrfam": "ipv4", 00:16:44.609 "trsvcid": "$NVMF_PORT", 00:16:44.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.609 "hdgst": ${hdgst:-false}, 00:16:44.609 "ddgst": ${ddgst:-false} 00:16:44.609 }, 00:16:44.609 "method": "bdev_nvme_attach_controller" 00:16:44.609 } 00:16:44.609 EOF 00:16:44.609 )") 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1405630 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.609 { 00:16:44.609 "params": { 00:16:44.609 "name": "Nvme$subsystem", 00:16:44.609 "trtype": "$TEST_TRANSPORT", 00:16:44.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.609 "adrfam": "ipv4", 00:16:44.609 "trsvcid": "$NVMF_PORT", 00:16:44.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.609 "hdgst": ${hdgst:-false}, 00:16:44.609 "ddgst": ${ddgst:-false} 00:16:44.609 }, 00:16:44.609 "method": "bdev_nvme_attach_controller" 00:16:44.609 } 00:16:44.609 EOF 00:16:44.609 )") 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.609 { 00:16:44.609 "params": { 00:16:44.609 "name": "Nvme$subsystem", 00:16:44.609 "trtype": "$TEST_TRANSPORT", 00:16:44.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.609 "adrfam": "ipv4", 00:16:44.609 "trsvcid": "$NVMF_PORT", 00:16:44.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.609 "hdgst": ${hdgst:-false}, 00:16:44.609 "ddgst": ${ddgst:-false} 00:16:44.609 }, 00:16:44.609 "method": "bdev_nvme_attach_controller" 00:16:44.609 } 00:16:44.609 EOF 00:16:44.609 )") 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1405622 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:44.609 "params": { 00:16:44.609 "name": "Nvme1", 00:16:44.609 "trtype": "tcp", 00:16:44.609 "traddr": "10.0.0.2", 00:16:44.609 "adrfam": "ipv4", 00:16:44.609 "trsvcid": "4420", 00:16:44.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.609 "hdgst": false, 00:16:44.609 "ddgst": false 00:16:44.609 }, 00:16:44.609 "method": "bdev_nvme_attach_controller" 00:16:44.609 }' 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
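Each bdevperf instance above receives its bdev configuration through gen_nvmf_target_json: one heredoc stanza per subsystem is accumulated into an array, joined with IFS=',', validated by jq, and handed to --json via /dev/fd/63. A compact equivalent of that pattern (the enclosing "subsystems" wrapper is inferred from nvmf/common.sh conventions rather than shown verbatim in the trace):

  gen_json() {
      # Build the same attach_controller stanza the trace prints, wrapped
      # in the standard SPDK JSON config shape bdevperf expects:
      jq -n '
        {subsystems: [{subsystem: "bdev", config: [{
          method: "bdev_nvme_attach_controller",
          params: {name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2",
                   adrfam: "ipv4", trsvcid: "4420",
                   subnqn: "nqn.2016-06.io.spdk:cnode1",
                   hostnqn: "nqn.2016-06.io.spdk:host1",
                   hdgst: false, ddgst: false}}]}]}'
  }
  # Process substitution is what surfaces as --json /dev/fd/63 in the trace:
  bdevperf -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256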
00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:44.609 "params": { 00:16:44.609 "name": "Nvme1", 00:16:44.609 "trtype": "tcp", 00:16:44.609 "traddr": "10.0.0.2", 00:16:44.609 "adrfam": "ipv4", 00:16:44.609 "trsvcid": "4420", 00:16:44.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.609 "hdgst": false, 00:16:44.609 "ddgst": false 00:16:44.609 }, 00:16:44.609 "method": "bdev_nvme_attach_controller" 00:16:44.609 }' 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:44.609 "params": { 00:16:44.609 "name": "Nvme1", 00:16:44.609 "trtype": "tcp", 00:16:44.609 "traddr": "10.0.0.2", 00:16:44.609 "adrfam": "ipv4", 00:16:44.609 "trsvcid": "4420", 00:16:44.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.609 "hdgst": false, 00:16:44.609 "ddgst": false 00:16:44.609 }, 00:16:44.609 "method": "bdev_nvme_attach_controller" 00:16:44.609 }' 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:44.609 21:33:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:44.609 "params": { 00:16:44.609 "name": "Nvme1", 00:16:44.609 "trtype": "tcp", 00:16:44.609 "traddr": "10.0.0.2", 00:16:44.609 "adrfam": "ipv4", 00:16:44.609 "trsvcid": "4420", 00:16:44.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.609 "hdgst": false, 00:16:44.609 "ddgst": false 00:16:44.609 }, 00:16:44.609 "method": "bdev_nvme_attach_controller" 00:16:44.609 }' 00:16:44.609 [2024-06-07 21:33:44.861177] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:16:44.609 [2024-06-07 21:33:44.861225] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:44.609 [2024-06-07 21:33:44.865525] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:16:44.609 [2024-06-07 21:33:44.865561] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:44.609 [2024-06-07 21:33:44.868519] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:16:44.609 [2024-06-07 21:33:44.868576] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:44.609 [2024-06-07 21:33:44.870273] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:16:44.609 [2024-06-07 21:33:44.870327] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:44.868 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.868 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.868 [2024-06-07 21:33:45.037378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.868 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.868 [2024-06-07 21:33:45.124333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.868 [2024-06-07 21:33:45.125915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:45.127 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.127 [2024-06-07 21:33:45.211714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:45.127 [2024-06-07 21:33:45.220048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.127 [2024-06-07 21:33:45.279955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.127 [2024-06-07 21:33:45.328666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:45.127 [2024-06-07 21:33:45.368400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:45.127 Running I/O for 1 seconds... 00:16:45.386 Running I/O for 1 seconds... 00:16:45.386 Running I/O for 1 seconds... 00:16:45.645 Running I/O for 1 seconds... 00:16:46.213 00:16:46.213 Latency(us) 00:16:46.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.213 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:46.213 Nvme1n1 : 1.03 5931.24 23.17 0.00 0.00 21168.43 8817.57 37653.41 00:16:46.213 =================================================================================================================== 00:16:46.213 Total : 5931.24 23.17 0.00 0.00 21168.43 8817.57 37653.41 00:16:46.473 00:16:46.473 Latency(us) 00:16:46.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.473 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:46.473 Nvme1n1 : 1.01 5037.98 19.68 0.00 0.00 25175.28 14834.97 46947.61 00:16:46.473 =================================================================================================================== 00:16:46.473 Total : 5037.98 19.68 0.00 0.00 25175.28 14834.97 46947.61 00:16:46.473 00:16:46.473 Latency(us) 00:16:46.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.473 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:46.473 Nvme1n1 : 1.00 162964.70 636.58 0.00 0.00 782.24 329.54 927.19 00:16:46.473 =================================================================================================================== 00:16:46.473 Total : 162964.70 636.58 0.00 0.00 782.24 329.54 927.19 00:16:46.473 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1405624 00:16:46.473 00:16:46.473 Latency(us) 00:16:46.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.473 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:46.473 Nvme1n1 : 1.01 5863.52 22.90 0.00 0.00 21750.99 6702.55 59578.18 00:16:46.473 =================================================================================================================== 00:16:46.473 Total : 5863.52 22.90 
0.00 0.00 21750.99 6702.55 59578.18 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1405627 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1405630 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.733 rmmod nvme_tcp 00:16:46.733 rmmod nvme_fabrics 00:16:46.733 rmmod nvme_keyring 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1405342 ']' 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1405342 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 1405342 ']' 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 1405342 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:46.733 21:33:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1405342 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1405342' 00:16:46.993 killing process with pid 1405342 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 1405342 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 1405342 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.993 21:33:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.527 21:33:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.527 00:16:49.527 real 0m12.438s 00:16:49.527 user 0m21.292s 00:16:49.527 sys 0m6.568s 00:16:49.527 21:33:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:49.527 21:33:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:49.527 ************************************ 00:16:49.527 END TEST nvmf_bdev_io_wait 00:16:49.527 ************************************ 00:16:49.527 21:33:49 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:49.527 21:33:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:49.527 21:33:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:49.527 21:33:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:49.527 ************************************ 00:16:49.527 START TEST nvmf_queue_depth 00:16:49.527 ************************************ 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:49.527 * Looking for test storage... 00:16:49.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.527 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.528 
21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.528 
21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.528 21:33:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.095 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.095 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.095 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.095 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.095 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:56.096 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:56.096 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:56.096 Found net devices under 0000:af:00.0: cvl_0_0 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:56.096 Found net devices under 0000:af:00.1: cvl_0_1 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:56.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:16:56.096 00:16:56.096 --- 10.0.0.2 ping statistics --- 00:16:56.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.096 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:16:56.096 00:16:56.096 --- 10.0.0.1 ping statistics --- 00:16:56.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.096 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.096 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1409924 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1409924 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1409924 ']' 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:56.097 21:33:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.097 [2024-06-07 21:33:55.600491] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:16:56.097 [2024-06-07 21:33:55.600547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.097 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.097 [2024-06-07 21:33:55.687149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.097 [2024-06-07 21:33:55.775707] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.097 [2024-06-07 21:33:55.775749] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.097 [2024-06-07 21:33:55.775760] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.097 [2024-06-07 21:33:55.775768] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.097 [2024-06-07 21:33:55.775776] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
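The tracepoint notices above apply here as well: the target runs with -e 0xFFFF, so its events can be inspected while the queue-depth workload is in flight. Following the log's own hints (the -f replay flag is an assumption; check spdk_trace --help on the build host):

  spdk_trace -s nvmf -i 0            # live snapshot of the shared trace ring
  cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the ring buffer for later
  spdk_trace -f /tmp/nvmf_trace.0    # offline analysis of the saved ring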
00:16:56.097 [2024-06-07 21:33:55.775798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.356 [2024-06-07 21:33:56.572962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.356 Malloc0 00:16:56.356 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.357 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.614 [2024-06-07 21:33:56.625789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1410176 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- 
target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1410176 /var/tmp/bdevperf.sock 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1410176 ']' 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:56.614 21:33:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.614 [2024-06-07 21:33:56.680525] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:16:56.614 [2024-06-07 21:33:56.680581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410176 ] 00:16:56.614 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.614 [2024-06-07 21:33:56.770069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.615 [2024-06-07 21:33:56.860060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.548 21:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:57.548 21:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:57.548 21:33:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:57.548 21:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.548 21:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:57.548 NVMe0n1 00:16:57.548 21:33:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.548 21:33:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.807 Running I/O for 10 seconds... 
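The -z flag above starts bdevperf idle on its own RPC socket; the test then attaches the remote namespace and kicks off the 1024-deep verify workload over that socket. The same three steps, with binary paths shortened:

  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests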
00:17:07.785 00:17:07.785 Latency(us) 00:17:07.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.785 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:07.785 Verification LBA range: start 0x0 length 0x4000 00:17:07.785 NVMe0n1 : 10.08 8124.85 31.74 0.00 0.00 125458.98 20852.36 82456.20 00:17:07.785 =================================================================================================================== 00:17:07.785 Total : 8124.85 31.74 0.00 0.00 125458.98 20852.36 82456.20 00:17:07.785 0 00:17:07.785 21:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1410176 00:17:07.785 21:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1410176 ']' 00:17:07.785 21:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1410176 00:17:07.785 21:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:17:07.785 21:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:07.785 21:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1410176 00:17:07.785 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:07.785 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:07.785 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1410176' 00:17:07.785 killing process with pid 1410176 00:17:07.785 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1410176 00:17:07.785 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.785 00:17:07.785 Latency(us) 00:17:07.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.785 =================================================================================================================== 00:17:07.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.785 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1410176 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.044 rmmod nvme_tcp 00:17:08.044 rmmod nvme_fabrics 00:17:08.044 rmmod nvme_keyring 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1409924 ']' 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1409924 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 
1409924 ']' 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1409924 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:08.044 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1409924 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1409924' 00:17:08.304 killing process with pid 1409924 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1409924 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1409924 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.304 21:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.841 21:34:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.841 00:17:10.841 real 0m21.250s 00:17:10.841 user 0m25.832s 00:17:10.841 sys 0m6.120s 00:17:10.841 21:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:10.841 21:34:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.841 ************************************ 00:17:10.841 END TEST nvmf_queue_depth 00:17:10.841 ************************************ 00:17:10.841 21:34:10 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:10.841 21:34:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:10.841 21:34:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:10.841 21:34:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.841 ************************************ 00:17:10.841 START TEST nvmf_target_multipath 00:17:10.841 ************************************ 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:10.841 * Looking for test storage... 
00:17:10.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.841 21:34:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.842 
21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.842 21:34:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.481 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:17.482 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:17.482 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.482 
21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:17.482 Found net devices under 0000:af:00.0: cvl_0_0 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:17.482 Found net devices under 0000:af:00.1: cvl_0_1 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.482 21:34:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.482 
21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:17:17.482 00:17:17.482 --- 10.0.0.2 ping statistics --- 00:17:17.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.482 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:17:17.482 00:17:17.482 --- 10.0.0.1 ping statistics --- 00:17:17.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.482 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:17.482 only one NIC for nvmf test 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.482 rmmod nvme_tcp 00:17:17.482 rmmod nvme_fabrics 00:17:17.482 rmmod nvme_keyring 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:17.482 21:34:17 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.482 21:34:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:19.389 00:17:19.389 real 0m8.775s 00:17:19.389 user 0m1.751s 00:17:19.389 sys 0m4.966s 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:19.389 21:34:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:19.389 
************************************ 00:17:19.389 END TEST nvmf_target_multipath 00:17:19.389 ************************************ 00:17:19.389 21:34:19 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:19.389 21:34:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:19.389 21:34:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:19.389 21:34:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.389 ************************************ 00:17:19.389 START TEST nvmf_zcopy 00:17:19.389 ************************************ 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:19.389 * Looking for test storage... 00:17:19.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:19.389 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
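Note: every TEST block in this log runs under the same autotest harness wrapper, which is what prints the asterisk banners and the real/user/sys timing footers. A rough sketch of that wrapper as its output implies (reconstructed from the banners in this log, with the wrapper's argument checks and xtrace plumbing elided):

  run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                                  # source of the real/user/sys lines
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }
  # as invoked above:
  # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp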
00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:19.390 21:34:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
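Note: the array bookkeeping that follows prepare_net_devs is nvmf/common.sh sorting candidate NICs by PCI vendor:device ID and then narrowing to the NIC family this job asked for (SPDK_TEST_NVMF_NICS=e810). A condensed sketch of the effect, with the IDs copied from the trace; how pci_bus_cache itself is populated is not visible in this log and is elided:

  intel=0x8086 mellanox=0x15b3
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
  x722=(${pci_bus_cache["$intel:0x37d2"]})
  mlx=(${pci_bus_cache["$mellanox:0xa2dc"]} ${pci_bus_cache["$mellanox:0x1021"]})  # plus the other mellanox IDs listed above
  pci_devs=("${e810[@]}")   # '[[ e810 == e810 ]]' in the trace: keep only E810 ports,
                            # which is why both 0000:af:00.x ports are found below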
00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:25.958 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:25.958 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.958 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:25.959 Found net devices under 0000:af:00.0: cvl_0_0 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:25.959 Found net devices under 0000:af:00.1: cvl_0_1 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.959 21:34:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:25.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:17:25.959 00:17:25.959 --- 10.0.0.2 ping statistics --- 00:17:25.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.959 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:25.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:17:25.959 00:17:25.959 --- 10.0.0.1 ping statistics --- 00:17:25.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.959 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.959 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1420193 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1420193 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 1420193 ']' 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:26.217 21:34:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.217 [2024-06-07 21:34:26.291076] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:17:26.217 [2024-06-07 21:34:26.291133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.217 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.217 [2024-06-07 21:34:26.378852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.217 [2024-06-07 21:34:26.464827] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.217 [2024-06-07 21:34:26.464873] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:26.217 [2024-06-07 21:34:26.464884] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.217 [2024-06-07 21:34:26.464893] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.217 [2024-06-07 21:34:26.464900] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.218 [2024-06-07 21:34:26.464923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:27.152 [2024-06-07 21:34:27.266400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.152 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:27.153 [2024-06-07 21:34:27.286579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:27.153 malloc0 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.153 
21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:27.153 { 00:17:27.153 "params": { 00:17:27.153 "name": "Nvme$subsystem", 00:17:27.153 "trtype": "$TEST_TRANSPORT", 00:17:27.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:27.153 "adrfam": "ipv4", 00:17:27.153 "trsvcid": "$NVMF_PORT", 00:17:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:27.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:27.153 "hdgst": ${hdgst:-false}, 00:17:27.153 "ddgst": ${ddgst:-false} 00:17:27.153 }, 00:17:27.153 "method": "bdev_nvme_attach_controller" 00:17:27.153 } 00:17:27.153 EOF 00:17:27.153 )") 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:27.153 21:34:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:27.153 "params": { 00:17:27.153 "name": "Nvme1", 00:17:27.153 "trtype": "tcp", 00:17:27.153 "traddr": "10.0.0.2", 00:17:27.153 "adrfam": "ipv4", 00:17:27.153 "trsvcid": "4420", 00:17:27.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:27.153 "hdgst": false, 00:17:27.153 "ddgst": false 00:17:27.153 }, 00:17:27.153 "method": "bdev_nvme_attach_controller" 00:17:27.153 }' 00:17:27.153 [2024-06-07 21:34:27.370000] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:17:27.153 [2024-06-07 21:34:27.370063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420343 ] 00:17:27.153 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.411 [2024-06-07 21:34:27.457721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.411 [2024-06-07 21:34:27.547788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.669 Running I/O for 10 seconds... 
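Note: the rpc_cmd calls traced above are the whole target-side setup for this run; rpc_cmd forwards its arguments to scripts/rpc.py against the nvmf_tgt that nvmfappstart launched inside the cvl_0_0_ns_spdk namespace. The same state could be rebuilt by hand with (flags copied from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                       # 32 MiB RAM-backed bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1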
00:17:37.645 
00:17:37.645 Latency(us)
00:17:37.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:37.645 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:37.645 Verification LBA range: start 0x0 length 0x1000
00:17:37.645 Nvme1n1 : 10.01 5786.44 45.21 0.00 0.00 22050.57 2666.12 35270.28
00:17:37.645 ===================================================================================================================
00:17:37.645 Total : 5786.44 45.21 0.00 0.00 22050.57 2666.12 35270.28
00:17:37.904 21:34:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1422255
00:17:37.904 21:34:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:37.904 21:34:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:37.904 {
00:17:37.904 "params": {
00:17:37.904 "name": "Nvme$subsystem",
00:17:37.904 "trtype": "$TEST_TRANSPORT",
00:17:37.904 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:37.904 "adrfam": "ipv4",
00:17:37.904 "trsvcid": "$NVMF_PORT",
00:17:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:37.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:37.904 "hdgst": ${hdgst:-false},
00:17:37.904 "ddgst": ${ddgst:-false}
00:17:37.904 },
00:17:37.904 "method": "bdev_nvme_attach_controller"
00:17:37.904 }
00:17:37.904 EOF
00:17:37.904 )")
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:17:37.904 [2024-06-07 21:34:38.006310] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:37.904 [2024-06-07 21:34:38.006350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
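Note: gen_nvmf_target_json, expanded twice in this trace, fills the heredoc above into a bdev_nvme_attach_controller config entry, pretty-prints it with jq, and feeds it to bdevperf through a process-substitution fd (/dev/fd/62 for the verify run, /dev/fd/63 for the randrw run). A stand-alone approximation of this run's expansion (values exactly as the trace printed them; the outer JSON skeleton the function wraps around this entry is not visible in the log and is left out):

  printf '%s\n' '{
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }' | jq .
  # consumed above as: bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192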
00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:37.904 21:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.904 "params": { 00:17:37.904 "name": "Nvme1", 00:17:37.904 "trtype": "tcp", 00:17:37.905 "traddr": "10.0.0.2", 00:17:37.905 "adrfam": "ipv4", 00:17:37.905 "trsvcid": "4420", 00:17:37.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.905 "hdgst": false, 00:17:37.905 "ddgst": false 00:17:37.905 }, 00:17:37.905 "method": "bdev_nvme_attach_controller" 00:17:37.905 }' 00:17:37.905 [2024-06-07 21:34:38.018305] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.018321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.030338] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.030353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.042374] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.042389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.046033] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:17:37.905 [2024-06-07 21:34:38.046088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422255 ] 00:17:37.905 [2024-06-07 21:34:38.054402] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.054418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.066436] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.066450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.078463] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.078477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.905 [2024-06-07 21:34:38.090502] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.090516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.102534] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.102548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.114569] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.114583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.126602] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.905 [2024-06-07 21:34:38.126615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.905 [2024-06-07 21:34:38.137411] app.c: 
00:17:37.905 [2024-06-07 21:34:38.138632 .. 21:34:38.222884] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [pair repeated 8x]
00:17:38.164 [2024-06-07 21:34:38.226077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:38.164 [2024-06-07 21:34:38.234906 .. 21:34:38.415447] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [pair repeated 16x]
00:17:38.165 Running I/O for 5 seconds...
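The "Total cores available: 1", "Reactor started on core 0", and "Running I/O for 5 seconds..." notices are bdevperf booting on the single core requested by the EAL arguments above (-c 0x1) and then driving I/O for a fixed 5-second window. A sketch of an equivalent manual run; only the core mask and run time are visible in this log, so the binary path, config path, queue depth, I/O size, and workload below are assumptions:

    # Drive I/O against the attached bdevs for 5 seconds on core 0. -m 0x1
    # matches the "-c 0x1" EAL core mask above and -t 5 matches "Running I/O
    # for 5 seconds...". The config path, queue depth (-q), I/O size (-o),
    # and workload (-w) are assumed values, not taken from this log.
    ./build/examples/bdevperf --json /tmp/bdevperf.json -m 0x1 -q 128 -o 4096 -w verify -t 5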
00:17:38.165 [2024-06-07 21:34:38.427452 .. 21:34:42.467411] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [pair repeated continuously for the remainder of the 5-second run; console prefixes advance from 00:17:38.165 to 00:17:42.311]
00:17:42.311 [2024-06-07 21:34:42.476588]
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.311 [2024-06-07 21:34:42.476611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.311 [2024-06-07 21:34:42.492424] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.311 [2024-06-07 21:34:42.492448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.311 [2024-06-07 21:34:42.508993] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.311 [2024-06-07 21:34:42.509017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.311 [2024-06-07 21:34:42.525616] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.311 [2024-06-07 21:34:42.525640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.311 [2024-06-07 21:34:42.542562] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.311 [2024-06-07 21:34:42.542585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.311 [2024-06-07 21:34:42.560415] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.311 [2024-06-07 21:34:42.560438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.311 [2024-06-07 21:34:42.576811] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.311 [2024-06-07 21:34:42.576836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.569 [2024-06-07 21:34:42.594523] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.569 [2024-06-07 21:34:42.594548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.569 [2024-06-07 21:34:42.610390] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.569 [2024-06-07 21:34:42.610414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.569 [2024-06-07 21:34:42.620735] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.569 [2024-06-07 21:34:42.620759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.569 [2024-06-07 21:34:42.635974] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.569 [2024-06-07 21:34:42.635997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.569 [2024-06-07 21:34:42.653598] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.569 [2024-06-07 21:34:42.653622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.569 [2024-06-07 21:34:42.669769] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.569 [2024-06-07 21:34:42.669793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.569 [2024-06-07 21:34:42.687300] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.569 [2024-06-07 21:34:42.687324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.703644] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.703667] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.720997] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.721021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.737054] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.737077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.753485] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.753507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.771004] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.771040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.786916] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.786939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.804555] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.804578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.820059] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.820082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.570 [2024-06-07 21:34:42.829788] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.570 [2024-06-07 21:34:42.829811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.845155] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.845180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.862649] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.862672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.878629] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.878652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.896319] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.896342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.912579] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.912604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.929817] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.929841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.946640] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.946663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.963335] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.963358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.980696] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.980719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:42.996481] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:42.996505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:43.006307] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:43.006331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:43.021984] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:43.022007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:43.031774] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:43.031797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:43.046947] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:43.046972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:43.056912] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:43.056936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:43.071585] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:43.071609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.828 [2024-06-07 21:34:43.089553] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.828 [2024-06-07 21:34:43.089576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.104669] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.104694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.123986] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.124010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.138957] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.138980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.155940] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.155964] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.173147] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.173170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.189104] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.189127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.206378] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.206401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.221827] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.221851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.232082] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.232104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.247192] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.247216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.263624] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.263647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.280103] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.280126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.295594] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.295617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.312369] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.312392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.328414] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.328438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.087 [2024-06-07 21:34:43.345710] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.087 [2024-06-07 21:34:43.345734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.345 [2024-06-07 21:34:43.361077] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.345 [2024-06-07 21:34:43.361102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.345 [2024-06-07 21:34:43.378057] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.345 [2024-06-07 21:34:43.378081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.345 [2024-06-07 21:34:43.394360] 
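These errors are the expected output of this phase of the test, not a failure: while the I/O job summarized below runs against NSID 1, zcopy.sh keeps calling nvmf_subsystem_add_ns for that same NSID, and the target rejects every attempt because the namespace already exists. A stand-alone loop that would produce the same flood (the RPC name, NQN, and bdev name are taken from this trace; the iteration count is illustrative) is roughly:

    # illustrative only: re-adding an NSID that is already allocated fails by design
    for _ in $(seq 1 50); do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            || true    # each call logs: Requested NSID 1 already in use
    done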
00:17:43.345 Latency(us)
00:17:43.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:43.345 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:43.345 Nvme1n1 : 5.01 11350.43 88.68 0.00 0.00 11264.14 5213.09 20852.36
00:17:43.345 ===================================================================================================================
00:17:43.345 Total : 11350.43 88.68 0.00 0.00 11264.14 5213.09 20852.36
[... the same error pair keeps repeating, from 21:34:43.453 through 21:34:43.646, while the I/O job is shut down ...]
00:17:43.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1422255) - No such process
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1422255
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:43.605 delay0
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:43.605 21:34:43 nvmf_tcp.nvmf_zcopy --
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:43.605 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.605 [2024-06-07 21:34:43.786060] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:50.171 Initializing NVMe Controllers 00:17:50.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:50.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:50.171 Initialization complete. Launching workers. 00:17:50.171 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2267 00:17:50.171 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2551, failed to submit 36 00:17:50.171 success 2366, unsuccess 185, failed 0 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.171 rmmod nvme_tcp 00:17:50.171 rmmod nvme_fabrics 00:17:50.171 rmmod nvme_keyring 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1420193 ']' 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1420193 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 1420193 ']' 00:17:50.171 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 1420193 00:17:50.172 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:17:50.172 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:50.172 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1420193 00:17:50.172 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:50.172 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:50.172 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1420193' 00:17:50.172 killing process with pid 1420193 00:17:50.172 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 1420193 00:17:50.172 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 1420193 00:17:50.431 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:50.431 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:50.431 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:50.431 21:34:50 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.431 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.431 21:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.431 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.431 21:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.964 21:34:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.964 00:17:52.964 real 0m33.164s 00:17:52.964 user 0m44.354s 00:17:52.964 sys 0m10.987s 00:17:52.964 21:34:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:52.964 21:34:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:52.964 ************************************ 00:17:52.964 END TEST nvmf_zcopy 00:17:52.964 ************************************ 00:17:52.964 21:34:52 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:52.964 21:34:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:52.964 21:34:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:52.964 21:34:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.964 ************************************ 00:17:52.964 START TEST nvmf_nmic 00:17:52.964 ************************************ 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:52.964 * Looking for test storage... 00:17:52.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 
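The nvmf_nmic run that starts here first sources test/nvmf/common.sh, and the trace above and below is that file populating its environment. A condensed sketch of the variables it shows being set (values copied from the log; the hostid derivation is an assumption about how the harness splits the generated NQN):

    # condensed from the common.sh trace; the hostid split is an assumption
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # here: nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # assumption: hostid is the uuid part of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NET_TYPE=phy                         # this job drives physical e810 NICs, not virtual interfaces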
00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.964 21:34:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.624 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:59.625 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:59.625 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:59.625 Found net devices under 0000:af:00.0: cvl_0_0 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:59.625 Found net devices under 0000:af:00.1: cvl_0_1 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:59.625 21:34:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:59.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:17:59.625 00:17:59.625 --- 10.0.0.2 ping statistics --- 00:17:59.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.625 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:59.625 00:17:59.625 --- 10.0.0.1 ping statistics --- 00:17:59.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.625 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.625 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1428518 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1428518 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 1428518 ']' 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:59.626 21:34:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:59.626 [2024-06-07 21:34:59.369434] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
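At this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk network namespace, so initiator traffic from the host has to cross the physical link that was just ping-tested. A reduced sketch of that launch-and-wait sequence (the binary path, flags, and namespace name come from the trace; the polling loop is a simplification of waitforlisten, not its actual code):

    # reduced from the nvmfappstart trace; the poll loop approximates waitforlisten
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # wait for the target's RPC socket to answer
    done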
00:17:59.626 [2024-06-07 21:34:59.369491] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.626 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.626 [2024-06-07 21:34:59.463757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.626 [2024-06-07 21:34:59.553482] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.626 [2024-06-07 21:34:59.553531] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.626 [2024-06-07 21:34:59.553541] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.626 [2024-06-07 21:34:59.553550] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.626 [2024-06-07 21:34:59.553557] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.626 [2024-06-07 21:34:59.553615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.626 [2024-06-07 21:34:59.553714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.626 [2024-06-07 21:34:59.553832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.626 [2024-06-07 21:34:59.553832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.228 [2024-06-07 21:35:00.358660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.228 Malloc0 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:00.228 [2024-06-07 21:35:00.414461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:18:00.228 test case1: single bdev can't be used in multiple subsystems
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:00.228 [2024-06-07 21:35:00.438348] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:18:00.228 [2024-06-07 21:35:00.438372] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:18:00.228 [2024-06-07 21:35:00.438382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:00.228 request:
00:18:00.228 {
00:18:00.228   "nqn": "nqn.2016-06.io.spdk:cnode2",
00:18:00.228   "namespace": {
00:18:00.228     "bdev_name": "Malloc0",
00:18:00.228     "no_auto_visible": false
00:18:00.228   },
00:18:00.228   "method": "nvmf_subsystem_add_ns",
00:18:00.228   "req_id": 1
00:18:00.228 }
00:18:00.228 Got JSON-RPC error response
00:18:00.228 response:
00:18:00.228 {
00:18:00.228   "code": -32602,
00:18:00.228   "message": "Invalid parameters"
00:18:00.228 }
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- #
echo ' Adding namespace failed - expected result.' 00:18:00.228 Adding namespace failed - expected result. 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:00.228 test case2: host connect to nvmf target in multiple paths 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:00.228 [2024-06-07 21:35:00.450488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.228 21:35:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:01.607 21:35:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:02.987 21:35:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:02.987 21:35:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:18:02.987 21:35:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.987 21:35:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:02.987 21:35:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:18:04.891 21:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:04.891 21:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:04.891 21:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.891 21:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:04.891 21:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.891 21:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:18:04.891 21:35:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:04.891 [global] 00:18:04.891 thread=1 00:18:04.891 invalidate=1 00:18:04.891 rw=write 00:18:04.891 time_based=1 00:18:04.891 runtime=1 00:18:04.891 ioengine=libaio 00:18:04.891 direct=1 00:18:04.891 bs=4096 00:18:04.891 iodepth=1 00:18:04.891 norandommap=0 00:18:04.891 numjobs=1 00:18:04.891 00:18:04.891 verify_dump=1 00:18:04.891 verify_backlog=512 00:18:04.891 verify_state_save=0 00:18:04.891 do_verify=1 00:18:04.891 verify=crc32c-intel 00:18:04.891 [job0] 00:18:04.891 filename=/dev/nvme0n1 00:18:04.891 Could not set queue depth (nvme0n1) 00:18:05.150 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:05.150 fio-3.35 00:18:05.150 Starting 1 thread 00:18:06.528 00:18:06.528 job0: (groupid=0, jobs=1): err= 0: pid=1429747: Fri Jun 7 21:35:06 2024 00:18:06.528 read: IOPS=21, BW=85.3KiB/s 
(87.3kB/s)(88.0KiB/1032msec) 00:18:06.528 slat (nsec): min=9812, max=24708, avg=21931.86, stdev=2824.86 00:18:06.528 clat (usec): min=40859, max=41752, avg=41004.20, stdev=181.72 00:18:06.528 lat (usec): min=40881, max=41761, avg=41026.13, stdev=179.20 00:18:06.528 clat percentiles (usec): 00:18:06.528 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:06.528 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:06.528 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:06.528 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:06.528 | 99.99th=[41681] 00:18:06.528 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:18:06.528 slat (nsec): min=10008, max=45036, avg=11471.63, stdev=2321.43 00:18:06.528 clat (usec): min=207, max=447, avg=237.09, stdev=23.52 00:18:06.528 lat (usec): min=218, max=492, avg=248.56, stdev=24.49 00:18:06.528 clat percentiles (usec): 00:18:06.528 | 1.00th=[ 212], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 219], 00:18:06.528 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:18:06.528 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 269], 95.00th=[ 273], 00:18:06.528 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 449], 99.95th=[ 449], 00:18:06.528 | 99.99th=[ 449] 00:18:06.528 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:06.528 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:06.528 lat (usec) : 250=62.73%, 500=33.15% 00:18:06.528 lat (msec) : 50=4.12% 00:18:06.528 cpu : usr=0.58%, sys=0.78%, ctx=534, majf=0, minf=2 00:18:06.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:06.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.528 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:06.528 00:18:06.528 Run status group 0 (all jobs): 00:18:06.528 READ: bw=85.3KiB/s (87.3kB/s), 85.3KiB/s-85.3KiB/s (87.3kB/s-87.3kB/s), io=88.0KiB (90.1kB), run=1032-1032msec 00:18:06.528 WRITE: bw=1984KiB/s (2032kB/s), 1984KiB/s-1984KiB/s (2032kB/s-2032kB/s), io=2048KiB (2097kB), run=1032-1032msec 00:18:06.528 00:18:06.528 Disk stats (read/write): 00:18:06.528 nvme0n1: ios=68/512, merge=0/0, ticks=788/109, in_queue=897, util=92.38% 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.528 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.528 rmmod nvme_tcp 00:18:06.528 rmmod nvme_fabrics 00:18:06.787 rmmod nvme_keyring 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1428518 ']' 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1428518 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 1428518 ']' 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 1428518 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1428518 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1428518' 00:18:06.787 killing process with pid 1428518 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 1428518 00:18:06.787 21:35:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 1428518 00:18:07.046 21:35:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:07.046 21:35:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:07.046 21:35:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:07.046 21:35:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.046 21:35:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.046 21:35:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.046 21:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.046 21:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.951 21:35:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.951 00:18:08.951 real 0m16.418s 00:18:08.951 user 0m43.355s 00:18:08.951 sys 0m5.717s 00:18:08.951 21:35:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:08.951 21:35:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:08.951 ************************************ 00:18:08.951 END TEST nvmf_nmic 00:18:08.951 ************************************ 00:18:08.951 21:35:09 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:08.951 21:35:09 
nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:08.951 21:35:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:08.951 21:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.211 ************************************ 00:18:09.211 START TEST nvmf_fio_target 00:18:09.211 ************************************ 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:09.211 * Looking for test storage... 00:18:09.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.211 21:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.777 21:35:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:15.777 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:15.777 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.777 21:35:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:15.777 Found net devices under 0000:af:00.0: cvl_0_0 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:15.777 Found net devices under 0000:af:00.1: cvl_0_1 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:15.777 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:15.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:18:15.778 00:18:15.778 --- 10.0.0.2 ping statistics --- 00:18:15.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.778 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:18:15.778 00:18:15.778 --- 10.0.0.1 ping statistics --- 00:18:15.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.778 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1434022 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1434022 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 1434022 ']' 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
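[Note: for anyone reproducing this target-in-a-namespace layout by hand, the nvmf_tcp_init steps traced above condense to roughly the following sketch. It uses the interface names (cvl_0_0, cvl_0_1) and addresses (10.0.0.1/10.0.0.2) from this particular run; the real common.sh helpers additionally flush stale addresses first and pick interfaces dynamically.]

  # move one port of the NIC pair into a private namespace for the target
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps 10.0.0.1 on the host side; target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # run the target inside the namespace so it can listen on 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF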
00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:15.778 21:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.778 [2024-06-07 21:35:15.975335] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:18:15.778 [2024-06-07 21:35:15.975390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.778 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.037 [2024-06-07 21:35:16.072269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.037 [2024-06-07 21:35:16.165297] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.037 [2024-06-07 21:35:16.165337] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.037 [2024-06-07 21:35:16.165347] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.037 [2024-06-07 21:35:16.165356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.037 [2024-06-07 21:35:16.165363] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.037 [2024-06-07 21:35:16.165412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.037 [2024-06-07 21:35:16.165512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.037 [2024-06-07 21:35:16.165645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.037 [2024-06-07 21:35:16.165645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.972 21:35:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:16.972 21:35:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:18:16.972 21:35:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.972 21:35:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:16.973 21:35:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.973 21:35:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.973 21:35:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:16.973 [2024-06-07 21:35:17.183672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.973 21:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:17.231 21:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:17.231 21:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:17.489 21:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:17.489 21:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:18.057 21:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
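[Note: the bdev and subsystem construction that fio.sh drives in the lines above and below condenses to the RPC sequence sketched here. The full Jenkins workspace paths are shortened to scripts/rpc.py for readability, and only one of the repeated bdev_malloc_create calls is shown.]

  # create the TCP transport and the malloc backing bdevs (Malloc0..Malloc6)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512

  # build one raid0 and one concat bdev on top of the malloc bdevs
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # expose everything through a single subsystem listening on the namespace IP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: one connect yields nvme0n1..nvme0n4, one block device per namespace
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      --hostid=00abaa28-3537-eb11-906e-0017a4403562 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420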
00:18:18.057 21:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:18.057 21:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:18.057 21:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:18.315 21:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:18.574 21:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:18.574 21:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:18.833 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:18.833 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:19.092 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:19.092 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:19.351 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:19.609 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:19.609 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:19.868 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:19.868 21:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.127 21:35:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.385 [2024-06-07 21:35:20.407056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.385 21:35:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:20.385 21:35:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:20.644 21:35:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:22.022 21:35:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:22.022 21:35:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:18:22.022 21:35:22 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.022 21:35:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:18:22.022 21:35:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:18:22.022 21:35:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:18:24.556 21:35:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:24.556 21:35:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:24.556 21:35:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:24.556 21:35:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:18:24.556 21:35:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.556 21:35:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:18:24.556 21:35:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:24.556 [global] 00:18:24.556 thread=1 00:18:24.556 invalidate=1 00:18:24.556 rw=write 00:18:24.556 time_based=1 00:18:24.556 runtime=1 00:18:24.556 ioengine=libaio 00:18:24.556 direct=1 00:18:24.556 bs=4096 00:18:24.556 iodepth=1 00:18:24.556 norandommap=0 00:18:24.556 numjobs=1 00:18:24.556 00:18:24.556 verify_dump=1 00:18:24.556 verify_backlog=512 00:18:24.556 verify_state_save=0 00:18:24.556 do_verify=1 00:18:24.556 verify=crc32c-intel 00:18:24.556 [job0] 00:18:24.556 filename=/dev/nvme0n1 00:18:24.556 [job1] 00:18:24.556 filename=/dev/nvme0n2 00:18:24.556 [job2] 00:18:24.556 filename=/dev/nvme0n3 00:18:24.556 [job3] 00:18:24.556 filename=/dev/nvme0n4 00:18:24.556 Could not set queue depth (nvme0n1) 00:18:24.556 Could not set queue depth (nvme0n2) 00:18:24.556 Could not set queue depth (nvme0n3) 00:18:24.556 Could not set queue depth (nvme0n4) 00:18:24.556 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.556 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.556 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.556 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:24.556 fio-3.35 00:18:24.556 Starting 4 threads 00:18:25.964 00:18:25.964 job0: (groupid=0, jobs=1): err= 0: pid=1435805: Fri Jun 7 21:35:25 2024 00:18:25.964 read: IOPS=173, BW=694KiB/s (711kB/s)(704KiB/1014msec) 00:18:25.964 slat (nsec): min=7004, max=24664, avg=9528.84, stdev=4643.99 00:18:25.964 clat (usec): min=388, max=41259, avg=4816.70, stdev=12613.88 00:18:25.964 lat (usec): min=397, max=41269, avg=4826.23, stdev=12617.81 00:18:25.964 clat percentiles (usec): 00:18:25.964 | 1.00th=[ 396], 5.00th=[ 400], 10.00th=[ 404], 20.00th=[ 416], 00:18:25.964 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 437], 60.00th=[ 457], 00:18:25.964 | 70.00th=[ 465], 80.00th=[ 482], 90.00th=[40633], 95.00th=[41157], 00:18:25.964 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:25.964 | 99.99th=[41157] 00:18:25.964 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:18:25.964 slat (nsec): min=10305, max=47564, avg=12567.13, stdev=3415.85 00:18:25.964 clat 
(usec): min=210, max=3695, avg=302.81, stdev=157.49 00:18:25.964 lat (usec): min=221, max=3707, avg=315.38, stdev=157.70 00:18:25.964 clat percentiles (usec): 00:18:25.964 | 1.00th=[ 233], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:18:25.964 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:18:25.964 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 351], 95.00th=[ 408], 00:18:25.964 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 3687], 99.95th=[ 3687], 00:18:25.964 | 99.99th=[ 3687] 00:18:25.964 bw ( KiB/s): min= 4096, max= 4096, per=20.28%, avg=4096.00, stdev= 0.00, samples=1 00:18:25.964 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:25.964 lat (usec) : 250=2.76%, 500=92.88%, 750=1.31%, 1000=0.15% 00:18:25.964 lat (msec) : 4=0.15%, 50=2.76% 00:18:25.964 cpu : usr=0.30%, sys=1.18%, ctx=691, majf=0, minf=1 00:18:25.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.964 issued rwts: total=176,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.964 job1: (groupid=0, jobs=1): err= 0: pid=1435806: Fri Jun 7 21:35:25 2024 00:18:25.964 read: IOPS=1015, BW=4063KiB/s (4161kB/s)(4100KiB/1009msec) 00:18:25.964 slat (nsec): min=7061, max=36713, avg=8071.11, stdev=1535.57 00:18:25.964 clat (usec): min=437, max=41521, avg=539.59, stdev=1281.79 00:18:25.964 lat (usec): min=445, max=41531, avg=547.66, stdev=1281.84 00:18:25.964 clat percentiles (usec): 00:18:25.964 | 1.00th=[ 449], 5.00th=[ 457], 10.00th=[ 465], 20.00th=[ 474], 00:18:25.964 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 490], 60.00th=[ 502], 00:18:25.964 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 562], 00:18:25.964 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 930], 99.95th=[41681], 00:18:25.964 | 99.99th=[41681] 00:18:25.964 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:18:25.964 slat (usec): min=10, max=152, avg=11.68, stdev= 3.96 00:18:25.964 clat (usec): min=218, max=480, avg=273.98, stdev=25.85 00:18:25.964 lat (usec): min=229, max=627, avg=285.66, stdev=27.15 00:18:25.964 clat percentiles (usec): 00:18:25.964 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:18:25.964 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:18:25.964 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:18:25.964 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 482], 99.95th=[ 482], 00:18:25.964 | 99.99th=[ 482] 00:18:25.964 bw ( KiB/s): min= 5664, max= 6624, per=30.42%, avg=6144.00, stdev=678.82, samples=2 00:18:25.964 iops : min= 1416, max= 1656, avg=1536.00, stdev=169.71, samples=2 00:18:25.964 lat (usec) : 250=5.43%, 500=78.64%, 750=15.85%, 1000=0.04% 00:18:25.964 lat (msec) : 50=0.04% 00:18:25.964 cpu : usr=2.18%, sys=4.07%, ctx=2562, majf=0, minf=1 00:18:25.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.964 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.964 job2: (groupid=0, jobs=1): err= 0: pid=1435807: Fri Jun 7 21:35:25 2024 00:18:25.964 read: 
IOPS=1062, BW=4252KiB/s (4354kB/s)(4256KiB/1001msec) 00:18:25.964 slat (nsec): min=6510, max=30398, avg=8069.93, stdev=1235.63 00:18:25.964 clat (usec): min=404, max=913, avg=524.37, stdev=41.20 00:18:25.964 lat (usec): min=411, max=921, avg=532.44, stdev=41.23 00:18:25.964 clat percentiles (usec): 00:18:25.964 | 1.00th=[ 437], 5.00th=[ 461], 10.00th=[ 478], 20.00th=[ 490], 00:18:25.964 | 30.00th=[ 502], 40.00th=[ 515], 50.00th=[ 529], 60.00th=[ 537], 00:18:25.964 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 578], 00:18:25.964 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[ 742], 99.95th=[ 914], 00:18:25.964 | 99.99th=[ 914] 00:18:25.964 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:25.964 slat (usec): min=9, max=102, avg=12.28, stdev= 7.00 00:18:25.964 clat (usec): min=171, max=463, avg=265.49, stdev=25.92 00:18:25.964 lat (usec): min=214, max=535, avg=277.77, stdev=27.53 00:18:25.964 clat percentiles (usec): 00:18:25.964 | 1.00th=[ 210], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 247], 00:18:25.964 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:18:25.964 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:18:25.964 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 453], 99.95th=[ 465], 00:18:25.964 | 99.99th=[ 465] 00:18:25.964 bw ( KiB/s): min= 6408, max= 6408, per=31.73%, avg=6408.00, stdev= 0.00, samples=1 00:18:25.964 iops : min= 1602, max= 1602, avg=1602.00, stdev= 0.00, samples=1 00:18:25.964 lat (usec) : 250=14.85%, 500=55.73%, 750=29.38%, 1000=0.04% 00:18:25.964 cpu : usr=2.00%, sys=2.30%, ctx=2603, majf=0, minf=1 00:18:25.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.964 issued rwts: total=1064,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.964 job3: (groupid=0, jobs=1): err= 0: pid=1435808: Fri Jun 7 21:35:25 2024 00:18:25.964 read: IOPS=1083, BW=4336KiB/s (4440kB/s)(4340KiB/1001msec) 00:18:25.964 slat (nsec): min=6601, max=29412, avg=7376.00, stdev=1058.76 00:18:25.964 clat (usec): min=420, max=642, avg=523.35, stdev=39.86 00:18:25.964 lat (usec): min=428, max=649, avg=530.73, stdev=39.80 00:18:25.964 clat percentiles (usec): 00:18:25.964 | 1.00th=[ 437], 5.00th=[ 453], 10.00th=[ 469], 20.00th=[ 486], 00:18:25.964 | 30.00th=[ 502], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 537], 00:18:25.964 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 570], 95.00th=[ 578], 00:18:25.964 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 644], 99.95th=[ 644], 00:18:25.964 | 99.99th=[ 644] 00:18:25.964 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:25.964 slat (nsec): min=9437, max=37819, avg=10611.48, stdev=1513.91 00:18:25.964 clat (usec): min=216, max=522, avg=262.06, stdev=17.98 00:18:25.964 lat (usec): min=226, max=560, avg=272.67, stdev=18.32 00:18:25.964 clat percentiles (usec): 00:18:25.964 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:18:25.964 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 265], 00:18:25.964 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:18:25.964 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 379], 99.95th=[ 523], 00:18:25.964 | 99.99th=[ 523] 00:18:25.964 bw ( KiB/s): min= 6688, max= 6688, per=33.11%, avg=6688.00, stdev= 0.00, 
samples=1 00:18:25.964 iops : min= 1672, max= 1672, avg=1672.00, stdev= 0.00, samples=1 00:18:25.964 lat (usec) : 250=13.74%, 500=57.23%, 750=29.03% 00:18:25.964 cpu : usr=1.30%, sys=2.50%, ctx=2621, majf=0, minf=2 00:18:25.964 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.964 issued rwts: total=1085,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.964 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.964 00:18:25.965 Run status group 0 (all jobs): 00:18:25.965 READ: bw=12.9MiB/s (13.5MB/s), 694KiB/s-4336KiB/s (711kB/s-4440kB/s), io=13.1MiB (13.7MB), run=1001-1014msec 00:18:25.965 WRITE: bw=19.7MiB/s (20.7MB/s), 2020KiB/s-6138KiB/s (2068kB/s-6285kB/s), io=20.0MiB (21.0MB), run=1001-1014msec 00:18:25.965 00:18:25.965 Disk stats (read/write): 00:18:25.965 nvme0n1: ios=194/512, merge=0/0, ticks=1503/142, in_queue=1645, util=83.27% 00:18:25.965 nvme0n2: ios=1074/1035, merge=0/0, ticks=567/264, in_queue=831, util=88.93% 00:18:25.965 nvme0n3: ios=1013/1024, merge=0/0, ticks=1417/253, in_queue=1670, util=91.26% 00:18:25.965 nvme0n4: ios=1068/1024, merge=0/0, ticks=610/266, in_queue=876, util=96.22% 00:18:25.965 21:35:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:25.965 [global] 00:18:25.965 thread=1 00:18:25.965 invalidate=1 00:18:25.965 rw=randwrite 00:18:25.965 time_based=1 00:18:25.965 runtime=1 00:18:25.965 ioengine=libaio 00:18:25.965 direct=1 00:18:25.965 bs=4096 00:18:25.965 iodepth=1 00:18:25.965 norandommap=0 00:18:25.965 numjobs=1 00:18:25.965 00:18:25.965 verify_dump=1 00:18:25.965 verify_backlog=512 00:18:25.965 verify_state_save=0 00:18:25.965 do_verify=1 00:18:25.965 verify=crc32c-intel 00:18:25.965 [job0] 00:18:25.965 filename=/dev/nvme0n1 00:18:25.965 [job1] 00:18:25.965 filename=/dev/nvme0n2 00:18:25.965 [job2] 00:18:25.965 filename=/dev/nvme0n3 00:18:25.965 [job3] 00:18:25.965 filename=/dev/nvme0n4 00:18:25.965 Could not set queue depth (nvme0n1) 00:18:25.965 Could not set queue depth (nvme0n2) 00:18:25.965 Could not set queue depth (nvme0n3) 00:18:25.965 Could not set queue depth (nvme0n4) 00:18:26.234 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.234 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.234 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.234 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.234 fio-3.35 00:18:26.234 Starting 4 threads 00:18:27.621 00:18:27.621 job0: (groupid=0, jobs=1): err= 0: pid=1436234: Fri Jun 7 21:35:27 2024 00:18:27.621 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:18:27.621 slat (nsec): min=6713, max=25229, avg=19797.86, stdev=4726.48 00:18:27.621 clat (usec): min=40810, max=41935, avg=41029.14, stdev=218.92 00:18:27.621 lat (usec): min=40833, max=41958, avg=41048.94, stdev=219.40 00:18:27.621 clat percentiles (usec): 00:18:27.621 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:27.621 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:27.621 | 70.00th=[41157], 
80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:27.621 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:27.621 | 99.99th=[41681] 00:18:27.621 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:18:27.621 slat (nsec): min=5921, max=41125, avg=11234.54, stdev=2708.71 00:18:27.621 clat (usec): min=195, max=505, avg=260.83, stdev=24.59 00:18:27.621 lat (usec): min=209, max=546, avg=272.06, stdev=25.33 00:18:27.621 clat percentiles (usec): 00:18:27.621 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 245], 00:18:27.621 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:18:27.621 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 302], 00:18:27.621 | 99.00th=[ 334], 99.50th=[ 371], 99.90th=[ 506], 99.95th=[ 506], 00:18:27.621 | 99.99th=[ 506] 00:18:27.621 bw ( KiB/s): min= 4096, max= 4096, per=28.74%, avg=4096.00, stdev= 0.00, samples=1 00:18:27.621 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:27.621 lat (usec) : 250=33.21%, 500=62.66%, 750=0.19% 00:18:27.621 lat (msec) : 50=3.94% 00:18:27.621 cpu : usr=0.20%, sys=0.70%, ctx=533, majf=0, minf=2 00:18:27.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.621 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.621 job1: (groupid=0, jobs=1): err= 0: pid=1436235: Fri Jun 7 21:35:27 2024 00:18:27.621 read: IOPS=586, BW=2346KiB/s (2402kB/s)(2360KiB/1006msec) 00:18:27.621 slat (usec): min=7, max=115, avg= 9.67, stdev= 4.94 00:18:27.621 clat (usec): min=381, max=41970, avg=1237.32, stdev=5501.33 00:18:27.621 lat (usec): min=389, max=41993, avg=1246.99, stdev=5502.71 00:18:27.621 clat percentiles (usec): 00:18:27.621 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 404], 20.00th=[ 433], 00:18:27.621 | 30.00th=[ 449], 40.00th=[ 465], 50.00th=[ 482], 60.00th=[ 494], 00:18:27.621 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 523], 95.00th=[ 619], 00:18:27.621 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:27.621 | 99.99th=[42206] 00:18:27.621 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:18:27.621 slat (nsec): min=9129, max=45439, avg=12428.79, stdev=2956.60 00:18:27.621 clat (usec): min=164, max=1060, avg=245.82, stdev=45.66 00:18:27.621 lat (usec): min=187, max=1076, avg=258.25, stdev=46.03 00:18:27.621 clat percentiles (usec): 00:18:27.621 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 212], 00:18:27.621 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 251], 00:18:27.621 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 318], 00:18:27.621 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 388], 99.95th=[ 1057], 00:18:27.621 | 99.99th=[ 1057] 00:18:27.621 bw ( KiB/s): min= 568, max= 7624, per=28.74%, avg=4096.00, stdev=4989.35, samples=2 00:18:27.621 iops : min= 142, max= 1906, avg=1024.00, stdev=1247.34, samples=2 00:18:27.621 lat (usec) : 250=37.42%, 500=52.17%, 750=9.42% 00:18:27.621 lat (msec) : 2=0.31%, 50=0.68% 00:18:27.621 cpu : usr=1.59%, sys=2.19%, ctx=1615, majf=0, minf=1 00:18:27.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.621 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.621 issued rwts: total=590,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.621 job2: (groupid=0, jobs=1): err= 0: pid=1436236: Fri Jun 7 21:35:27 2024 00:18:27.621 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:18:27.621 slat (nsec): min=11955, max=33111, avg=22169.33, stdev=3652.65 00:18:27.621 clat (usec): min=40879, max=41981, avg=41049.49, stdev=251.84 00:18:27.621 lat (usec): min=40901, max=42014, avg=41071.66, stdev=252.88 00:18:27.621 clat percentiles (usec): 00:18:27.621 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:27.621 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:27.621 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:27.621 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:27.621 | 99.99th=[42206] 00:18:27.621 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:18:27.621 slat (nsec): min=10238, max=44919, avg=12492.04, stdev=2438.58 00:18:27.621 clat (usec): min=210, max=480, avg=263.67, stdev=29.00 00:18:27.621 lat (usec): min=222, max=518, avg=276.17, stdev=29.37 00:18:27.621 clat percentiles (usec): 00:18:27.621 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:18:27.621 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:18:27.621 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 322], 00:18:27.621 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 482], 99.95th=[ 482], 00:18:27.621 | 99.99th=[ 482] 00:18:27.621 bw ( KiB/s): min= 4096, max= 4096, per=28.74%, avg=4096.00, stdev= 0.00, samples=1 00:18:27.621 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:27.621 lat (usec) : 250=35.65%, 500=60.41% 00:18:27.621 lat (msec) : 50=3.94% 00:18:27.622 cpu : usr=0.20%, sys=1.19%, ctx=533, majf=0, minf=1 00:18:27.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.622 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.622 job3: (groupid=0, jobs=1): err= 0: pid=1436237: Fri Jun 7 21:35:27 2024 00:18:27.622 read: IOPS=1301, BW=5207KiB/s (5332kB/s)(5212KiB/1001msec) 00:18:27.622 slat (nsec): min=6451, max=41512, avg=7812.60, stdev=1346.60 00:18:27.622 clat (usec): min=291, max=41089, avg=459.78, stdev=1585.79 00:18:27.622 lat (usec): min=298, max=41101, avg=467.59, stdev=1585.94 00:18:27.622 clat percentiles (usec): 00:18:27.622 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:18:27.622 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 437], 60.00th=[ 449], 00:18:27.622 | 70.00th=[ 457], 80.00th=[ 461], 90.00th=[ 474], 95.00th=[ 482], 00:18:27.622 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[40633], 99.95th=[41157], 00:18:27.622 | 99.99th=[41157] 00:18:27.622 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:27.622 slat (nsec): min=9363, max=38398, avg=11450.03, stdev=1798.80 00:18:27.622 clat (usec): min=181, max=436, avg=238.20, stdev=29.51 00:18:27.622 lat (usec): min=192, max=459, avg=249.65, stdev=29.80 00:18:27.622 clat percentiles (usec): 00:18:27.622 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 
200], 20.00th=[ 215], 00:18:27.622 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:18:27.622 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:18:27.622 | 99.00th=[ 322], 99.50th=[ 388], 99.90th=[ 429], 99.95th=[ 437], 00:18:27.622 | 99.99th=[ 437] 00:18:27.622 bw ( KiB/s): min= 5328, max= 5328, per=37.39%, avg=5328.00, stdev= 0.00, samples=1 00:18:27.622 iops : min= 1332, max= 1332, avg=1332.00, stdev= 0.00, samples=1 00:18:27.622 lat (usec) : 250=35.82%, 500=63.75%, 750=0.35% 00:18:27.622 lat (msec) : 50=0.07% 00:18:27.622 cpu : usr=1.30%, sys=3.20%, ctx=2840, majf=0, minf=1 00:18:27.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.622 issued rwts: total=1303,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.622 00:18:27.622 Run status group 0 (all jobs): 00:18:27.622 READ: bw=7694KiB/s (7878kB/s), 83.5KiB/s-5207KiB/s (85.5kB/s-5332kB/s), io=7740KiB (7926kB), run=1001-1006msec 00:18:27.622 WRITE: bw=13.9MiB/s (14.6MB/s), 2036KiB/s-6138KiB/s (2085kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1006msec 00:18:27.622 00:18:27.622 Disk stats (read/write): 00:18:27.622 nvme0n1: ios=67/512, merge=0/0, ticks=738/133, in_queue=871, util=88.08% 00:18:27.622 nvme0n2: ios=621/1024, merge=0/0, ticks=1484/240, in_queue=1724, util=97.97% 00:18:27.622 nvme0n3: ios=74/512, merge=0/0, ticks=779/129, in_queue=908, util=92.01% 00:18:27.622 nvme0n4: ios=1062/1308, merge=0/0, ticks=1359/314, in_queue=1673, util=99.79% 00:18:27.622 21:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:27.622 [global] 00:18:27.622 thread=1 00:18:27.622 invalidate=1 00:18:27.622 rw=write 00:18:27.622 time_based=1 00:18:27.622 runtime=1 00:18:27.622 ioengine=libaio 00:18:27.622 direct=1 00:18:27.622 bs=4096 00:18:27.622 iodepth=128 00:18:27.622 norandommap=0 00:18:27.622 numjobs=1 00:18:27.622 00:18:27.622 verify_dump=1 00:18:27.622 verify_backlog=512 00:18:27.622 verify_state_save=0 00:18:27.622 do_verify=1 00:18:27.622 verify=crc32c-intel 00:18:27.622 [job0] 00:18:27.622 filename=/dev/nvme0n1 00:18:27.622 [job1] 00:18:27.622 filename=/dev/nvme0n2 00:18:27.622 [job2] 00:18:27.622 filename=/dev/nvme0n3 00:18:27.622 [job3] 00:18:27.622 filename=/dev/nvme0n4 00:18:27.622 Could not set queue depth (nvme0n1) 00:18:27.622 Could not set queue depth (nvme0n2) 00:18:27.622 Could not set queue depth (nvme0n3) 00:18:27.622 Could not set queue depth (nvme0n4) 00:18:27.880 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.880 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.880 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.880 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:27.880 fio-3.35 00:18:27.880 Starting 4 threads 00:18:29.255 00:18:29.255 job0: (groupid=0, jobs=1): err= 0: pid=1436650: Fri Jun 7 21:35:29 2024 00:18:29.255 read: IOPS=3126, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1003msec) 00:18:29.255 slat (nsec): min=1652, max=16617k, avg=146289.26, 
stdev=1052380.76 00:18:29.255 clat (usec): min=767, max=49999, avg=18173.84, stdev=6845.13 00:18:29.255 lat (usec): min=2880, max=50041, avg=18320.13, stdev=6919.93 00:18:29.255 clat percentiles (usec): 00:18:29.255 | 1.00th=[ 7111], 5.00th=[10028], 10.00th=[11338], 20.00th=[13566], 00:18:29.255 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15926], 60.00th=[17695], 00:18:29.255 | 70.00th=[19268], 80.00th=[20841], 90.00th=[28443], 95.00th=[32637], 00:18:29.255 | 99.00th=[42206], 99.50th=[43779], 99.90th=[50070], 99.95th=[50070], 00:18:29.255 | 99.99th=[50070] 00:18:29.255 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:18:29.255 slat (usec): min=2, max=13813, avg=140.94, stdev=755.98 00:18:29.255 clat (usec): min=4068, max=49967, avg=19441.42, stdev=8803.91 00:18:29.255 lat (usec): min=4089, max=49973, avg=19582.36, stdev=8870.40 00:18:29.255 clat percentiles (usec): 00:18:29.255 | 1.00th=[ 5473], 5.00th=[ 8848], 10.00th=[11731], 20.00th=[13173], 00:18:29.255 | 30.00th=[13698], 40.00th=[14615], 50.00th=[15795], 60.00th=[17171], 00:18:29.255 | 70.00th=[21365], 80.00th=[29230], 90.00th=[34341], 95.00th=[35914], 00:18:29.255 | 99.00th=[40633], 99.50th=[40633], 99.90th=[42206], 99.95th=[50070], 00:18:29.255 | 99.99th=[50070] 00:18:29.255 bw ( KiB/s): min=12288, max=15872, per=24.99%, avg=14080.00, stdev=2534.27, samples=2 00:18:29.255 iops : min= 3072, max= 3968, avg=3520.00, stdev=633.57, samples=2 00:18:29.255 lat (usec) : 1000=0.01% 00:18:29.255 lat (msec) : 4=0.07%, 10=5.60%, 20=63.35%, 50=30.97% 00:18:29.255 cpu : usr=2.89%, sys=4.09%, ctx=322, majf=0, minf=1 00:18:29.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:29.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:29.255 issued rwts: total=3136,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:29.255 job1: (groupid=0, jobs=1): err= 0: pid=1436651: Fri Jun 7 21:35:29 2024 00:18:29.255 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:18:29.255 slat (nsec): min=1866, max=33364k, avg=143203.76, stdev=1192231.83 00:18:29.255 clat (usec): min=5205, max=70257, avg=17651.33, stdev=10083.77 00:18:29.255 lat (usec): min=5211, max=70265, avg=17794.53, stdev=10171.02 00:18:29.255 clat percentiles (usec): 00:18:29.255 | 1.00th=[ 7111], 5.00th=[ 9896], 10.00th=[12125], 20.00th=[13042], 00:18:29.255 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13960], 60.00th=[14877], 00:18:29.255 | 70.00th=[16188], 80.00th=[20317], 90.00th=[30278], 95.00th=[34866], 00:18:29.255 | 99.00th=[65799], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:18:29.255 | 99.99th=[70779] 00:18:29.255 write: IOPS=4182, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1006msec); 0 zone resets 00:18:29.255 slat (usec): min=3, max=10598, avg=92.64, stdev=494.28 00:18:29.255 clat (usec): min=1954, max=70261, avg=13138.79, stdev=4030.82 00:18:29.255 lat (usec): min=3390, max=70271, avg=13231.44, stdev=4049.11 00:18:29.255 clat percentiles (usec): 00:18:29.255 | 1.00th=[ 4490], 5.00th=[ 7046], 10.00th=[ 8225], 20.00th=[10290], 00:18:29.255 | 30.00th=[11994], 40.00th=[13304], 50.00th=[13829], 60.00th=[13960], 00:18:29.255 | 70.00th=[14091], 80.00th=[14353], 90.00th=[15795], 95.00th=[18482], 00:18:29.255 | 99.00th=[25297], 99.50th=[25560], 99.90th=[38011], 99.95th=[38011], 00:18:29.255 | 99.99th=[70779] 00:18:29.255 bw ( KiB/s): min=12304, 
max=20464, per=29.08%, avg=16384.00, stdev=5769.99, samples=2 00:18:29.255 iops : min= 3076, max= 5116, avg=4096.00, stdev=1442.50, samples=2 00:18:29.255 lat (msec) : 2=0.01%, 4=0.16%, 10=11.93%, 20=75.81%, 50=10.65% 00:18:29.256 lat (msec) : 100=1.45% 00:18:29.256 cpu : usr=3.68%, sys=4.98%, ctx=492, majf=0, minf=1 00:18:29.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:29.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:29.256 issued rwts: total=4096,4208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:29.256 job2: (groupid=0, jobs=1): err= 0: pid=1436652: Fri Jun 7 21:35:29 2024 00:18:29.256 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:18:29.256 slat (usec): min=2, max=9278, avg=121.14, stdev=719.50 00:18:29.256 clat (usec): min=6789, max=43338, avg=16230.16, stdev=6698.59 00:18:29.256 lat (usec): min=6799, max=44982, avg=16351.30, stdev=6761.33 00:18:29.256 clat percentiles (usec): 00:18:29.256 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[10814], 20.00th=[11469], 00:18:29.256 | 30.00th=[12256], 40.00th=[13304], 50.00th=[14615], 60.00th=[15139], 00:18:29.256 | 70.00th=[16057], 80.00th=[18744], 90.00th=[28705], 95.00th=[33162], 00:18:29.256 | 99.00th=[35914], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:18:29.256 | 99.99th=[43254] 00:18:29.256 write: IOPS=3764, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1002msec); 0 zone resets 00:18:29.256 slat (usec): min=3, max=35493, avg=142.65, stdev=1058.46 00:18:29.256 clat (usec): min=1016, max=51722, avg=15732.31, stdev=7300.91 00:18:29.256 lat (usec): min=1025, max=77969, avg=15874.96, stdev=7419.10 00:18:29.256 clat percentiles (usec): 00:18:29.256 | 1.00th=[ 6521], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11469], 00:18:29.256 | 30.00th=[12780], 40.00th=[13698], 50.00th=[14615], 60.00th=[14877], 00:18:29.256 | 70.00th=[15401], 80.00th=[15926], 90.00th=[20579], 95.00th=[36439], 00:18:29.256 | 99.00th=[44827], 99.50th=[46400], 99.90th=[51643], 99.95th=[51643], 00:18:29.256 | 99.99th=[51643] 00:18:29.256 bw ( KiB/s): min=15288, max=15288, per=27.14%, avg=15288.00, stdev= 0.00, samples=1 00:18:29.256 iops : min= 3822, max= 3822, avg=3822.00, stdev= 0.00, samples=1 00:18:29.256 lat (msec) : 2=0.16%, 10=5.11%, 20=81.35%, 50=13.17%, 100=0.20% 00:18:29.256 cpu : usr=3.80%, sys=4.80%, ctx=359, majf=0, minf=1 00:18:29.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:29.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:29.256 issued rwts: total=3584,3772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:29.256 job3: (groupid=0, jobs=1): err= 0: pid=1436653: Fri Jun 7 21:35:29 2024 00:18:29.256 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:18:29.256 slat (usec): min=2, max=29065, avg=207.58, stdev=1448.19 00:18:29.256 clat (usec): min=9009, max=63652, avg=24756.45, stdev=8911.66 00:18:29.256 lat (usec): min=9014, max=63660, avg=24964.03, stdev=9015.87 00:18:29.256 clat percentiles (usec): 00:18:29.256 | 1.00th=[ 9110], 5.00th=[13960], 10.00th=[15926], 20.00th=[19268], 00:18:29.256 | 30.00th=[20841], 40.00th=[23200], 50.00th=[24249], 60.00th=[24773], 00:18:29.256 | 70.00th=[26084], 
80.00th=[27657], 90.00th=[34341], 95.00th=[43779], 00:18:29.256 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:18:29.256 | 99.99th=[63701] 00:18:29.256 write: IOPS=2680, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1014msec); 0 zone resets 00:18:29.256 slat (usec): min=3, max=17953, avg=164.99, stdev=953.06 00:18:29.256 clat (usec): min=1531, max=77114, avg=23999.62, stdev=15026.19 00:18:29.256 lat (usec): min=1542, max=77123, avg=24164.61, stdev=15102.16 00:18:29.256 clat percentiles (usec): 00:18:29.256 | 1.00th=[10028], 5.00th=[10683], 10.00th=[12387], 20.00th=[14222], 00:18:29.256 | 30.00th=[15533], 40.00th=[15926], 50.00th=[18220], 60.00th=[21103], 00:18:29.256 | 70.00th=[27657], 80.00th=[30278], 90.00th=[39584], 95.00th=[62653], 00:18:29.256 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:18:29.256 | 99.99th=[77071] 00:18:29.256 bw ( KiB/s): min= 8192, max=12528, per=18.39%, avg=10360.00, stdev=3066.02, samples=2 00:18:29.256 iops : min= 2048, max= 3132, avg=2590.00, stdev=766.50, samples=2 00:18:29.256 lat (msec) : 2=0.04%, 10=1.86%, 20=39.52%, 50=52.69%, 100=5.89% 00:18:29.256 cpu : usr=2.57%, sys=3.46%, ctx=310, majf=0, minf=1 00:18:29.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:29.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:29.256 issued rwts: total=2560,2718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:29.256 00:18:29.256 Run status group 0 (all jobs): 00:18:29.256 READ: bw=51.5MiB/s (54.0MB/s), 9.86MiB/s-15.9MiB/s (10.3MB/s-16.7MB/s), io=52.2MiB (54.8MB), run=1002-1014msec 00:18:29.256 WRITE: bw=55.0MiB/s (57.7MB/s), 10.5MiB/s-16.3MiB/s (11.0MB/s-17.1MB/s), io=55.8MiB (58.5MB), run=1002-1014msec 00:18:29.256 00:18:29.256 Disk stats (read/write): 00:18:29.256 nvme0n1: ios=2584/2798, merge=0/0, ticks=39060/43200, in_queue=82260, util=98.30% 00:18:29.256 nvme0n2: ios=3633/4087, merge=0/0, ticks=51109/52255, in_queue=103364, util=89.04% 00:18:29.256 nvme0n3: ios=2674/3072, merge=0/0, ticks=24603/25148, in_queue=49751, util=94.81% 00:18:29.256 nvme0n4: ios=2105/2399, merge=0/0, ticks=20124/22229, in_queue=42353, util=95.72% 00:18:29.256 21:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:29.256 [global] 00:18:29.256 thread=1 00:18:29.256 invalidate=1 00:18:29.256 rw=randwrite 00:18:29.256 time_based=1 00:18:29.256 runtime=1 00:18:29.256 ioengine=libaio 00:18:29.256 direct=1 00:18:29.256 bs=4096 00:18:29.256 iodepth=128 00:18:29.256 norandommap=0 00:18:29.256 numjobs=1 00:18:29.256 00:18:29.256 verify_dump=1 00:18:29.256 verify_backlog=512 00:18:29.256 verify_state_save=0 00:18:29.256 do_verify=1 00:18:29.256 verify=crc32c-intel 00:18:29.256 [job0] 00:18:29.256 filename=/dev/nvme0n1 00:18:29.256 [job1] 00:18:29.256 filename=/dev/nvme0n2 00:18:29.256 [job2] 00:18:29.256 filename=/dev/nvme0n3 00:18:29.256 [job3] 00:18:29.256 filename=/dev/nvme0n4 00:18:29.256 Could not set queue depth (nvme0n1) 00:18:29.256 Could not set queue depth (nvme0n2) 00:18:29.256 Could not set queue depth (nvme0n3) 00:18:29.256 Could not set queue depth (nvme0n4) 00:18:29.514 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.514 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.514 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.514 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.514 fio-3.35 00:18:29.514 Starting 4 threads 00:18:30.889 00:18:30.889 job0: (groupid=0, jobs=1): err= 0: pid=1437077: Fri Jun 7 21:35:30 2024 00:18:30.889 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:18:30.889 slat (nsec): min=1589, max=22847k, avg=153153.32, stdev=1069742.46 00:18:30.889 clat (usec): min=4503, max=41473, avg=19460.16, stdev=6341.64 00:18:30.889 lat (usec): min=5585, max=41496, avg=19613.31, stdev=6394.28 00:18:30.889 clat percentiles (usec): 00:18:30.889 | 1.00th=[ 5604], 5.00th=[ 8094], 10.00th=[10421], 20.00th=[14091], 00:18:30.889 | 30.00th=[16319], 40.00th=[17695], 50.00th=[19530], 60.00th=[21103], 00:18:30.889 | 70.00th=[22938], 80.00th=[25560], 90.00th=[27657], 95.00th=[29230], 00:18:30.889 | 99.00th=[31851], 99.50th=[32113], 99.90th=[36439], 99.95th=[40109], 00:18:30.889 | 99.99th=[41681] 00:18:30.889 write: IOPS=3095, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1002msec); 0 zone resets 00:18:30.889 slat (usec): min=2, max=26057, avg=163.48, stdev=971.94 00:18:30.889 clat (usec): min=1147, max=61473, avg=21241.26, stdev=9152.85 00:18:30.889 lat (usec): min=3394, max=61503, avg=21404.74, stdev=9220.45 00:18:30.889 clat percentiles (usec): 00:18:30.889 | 1.00th=[ 7046], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[12911], 00:18:30.889 | 30.00th=[15926], 40.00th=[16909], 50.00th=[19268], 60.00th=[22938], 00:18:30.889 | 70.00th=[27132], 80.00th=[28443], 90.00th=[34341], 95.00th=[39060], 00:18:30.889 | 99.00th=[42206], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:18:30.889 | 99.99th=[61604] 00:18:30.889 bw ( KiB/s): min=12288, max=12288, per=21.64%, avg=12288.00, stdev= 0.00, samples=2 00:18:30.889 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:30.889 lat (msec) : 2=0.02%, 4=0.11%, 10=7.42%, 20=47.36%, 50=44.77% 00:18:30.889 lat (msec) : 100=0.32% 00:18:30.889 cpu : usr=2.40%, sys=3.00%, ctx=336, majf=0, minf=1 00:18:30.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:30.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.889 issued rwts: total=3072,3102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.889 job1: (groupid=0, jobs=1): err= 0: pid=1437078: Fri Jun 7 21:35:30 2024 00:18:30.889 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:18:30.889 slat (nsec): min=1093, max=15880k, avg=139489.70, stdev=888450.85 00:18:30.889 clat (usec): min=4568, max=43410, avg=17741.23, stdev=7857.67 00:18:30.889 lat (usec): min=4574, max=44897, avg=17880.72, stdev=7896.35 00:18:30.889 clat percentiles (usec): 00:18:30.889 | 1.00th=[ 8356], 5.00th=[10683], 10.00th=[11207], 20.00th=[13173], 00:18:30.889 | 30.00th=[13698], 40.00th=[14484], 50.00th=[15008], 60.00th=[15926], 00:18:30.889 | 70.00th=[17433], 80.00th=[21103], 90.00th=[27395], 95.00th=[39060], 00:18:30.889 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:30.889 | 99.99th=[43254] 00:18:30.889 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1007msec); 0 zone resets 00:18:30.889 slat 
(nsec): min=1653, max=12743k, avg=120563.20, stdev=747036.65 00:18:30.889 clat (usec): min=6161, max=43330, avg=16055.68, stdev=5052.79 00:18:30.889 lat (usec): min=6171, max=43338, avg=16176.24, stdev=5086.22 00:18:30.890 clat percentiles (usec): 00:18:30.890 | 1.00th=[ 6915], 5.00th=[10421], 10.00th=[11863], 20.00th=[12911], 00:18:30.890 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14091], 60.00th=[14877], 00:18:30.890 | 70.00th=[16909], 80.00th=[19530], 90.00th=[23725], 95.00th=[28443], 00:18:30.890 | 99.00th=[30540], 99.50th=[30802], 99.90th=[34341], 99.95th=[43254], 00:18:30.890 | 99.99th=[43254] 00:18:30.890 bw ( KiB/s): min=14144, max=16384, per=26.88%, avg=15264.00, stdev=1583.92, samples=2 00:18:30.890 iops : min= 3536, max= 4096, avg=3816.00, stdev=395.98, samples=2 00:18:30.890 lat (msec) : 10=4.22%, 20=75.51%, 50=20.26% 00:18:30.890 cpu : usr=2.29%, sys=4.47%, ctx=353, majf=0, minf=1 00:18:30.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:30.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.890 issued rwts: total=3584,3943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.890 job2: (groupid=0, jobs=1): err= 0: pid=1437080: Fri Jun 7 21:35:30 2024 00:18:30.890 read: IOPS=2914, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1007msec) 00:18:30.890 slat (nsec): min=1014, max=21754k, avg=178570.40, stdev=1224117.35 00:18:30.890 clat (usec): min=5029, max=62861, avg=22747.05, stdev=11777.07 00:18:30.890 lat (usec): min=5036, max=62884, avg=22925.62, stdev=11833.26 00:18:30.890 clat percentiles (usec): 00:18:30.890 | 1.00th=[ 9110], 5.00th=[10945], 10.00th=[12125], 20.00th=[13566], 00:18:30.890 | 30.00th=[15926], 40.00th=[16319], 50.00th=[17695], 60.00th=[20055], 00:18:30.890 | 70.00th=[27919], 80.00th=[31851], 90.00th=[43254], 95.00th=[47449], 00:18:30.890 | 99.00th=[57934], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:18:30.890 | 99.99th=[62653] 00:18:30.890 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:18:30.890 slat (nsec): min=1913, max=20310k, avg=140535.33, stdev=908058.22 00:18:30.890 clat (usec): min=4566, max=55683, avg=19475.52, stdev=8959.12 00:18:30.890 lat (usec): min=4574, max=55690, avg=19616.05, stdev=9007.39 00:18:30.890 clat percentiles (usec): 00:18:30.890 | 1.00th=[ 5276], 5.00th=[ 7832], 10.00th=[11600], 20.00th=[14222], 00:18:30.890 | 30.00th=[15533], 40.00th=[16057], 50.00th=[16450], 60.00th=[17171], 00:18:30.890 | 70.00th=[20317], 80.00th=[25035], 90.00th=[30802], 95.00th=[41157], 00:18:30.890 | 99.00th=[48497], 99.50th=[48497], 99.90th=[54789], 99.95th=[54789], 00:18:30.890 | 99.99th=[55837] 00:18:30.890 bw ( KiB/s): min=12288, max=12288, per=21.64%, avg=12288.00, stdev= 0.00, samples=2 00:18:30.890 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:30.890 lat (msec) : 10=5.94%, 20=58.73%, 50=33.33%, 100=2.00% 00:18:30.890 cpu : usr=2.19%, sys=4.37%, ctx=228, majf=0, minf=1 00:18:30.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:30.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.890 issued rwts: total=2935,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.890 job3: 
(groupid=0, jobs=1): err= 0: pid=1437081: Fri Jun 7 21:35:30 2024 00:18:30.890 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:18:30.890 slat (nsec): min=1623, max=22511k, avg=119906.54, stdev=875970.44 00:18:30.890 clat (usec): min=4541, max=62175, avg=17637.52, stdev=11669.51 00:18:30.890 lat (usec): min=4544, max=62181, avg=17757.43, stdev=11737.99 00:18:30.890 clat percentiles (usec): 00:18:30.890 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[11076], 00:18:30.890 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12518], 60.00th=[13698], 00:18:30.890 | 70.00th=[16909], 80.00th=[23987], 90.00th=[36439], 95.00th=[44827], 00:18:30.890 | 99.00th=[60556], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:18:30.890 | 99.99th=[62129] 00:18:30.890 write: IOPS=4147, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1007msec); 0 zone resets 00:18:30.890 slat (usec): min=2, max=16244, avg=85.64, stdev=673.47 00:18:30.890 clat (usec): min=2271, max=52737, avg=13138.21, stdev=8173.97 00:18:30.890 lat (usec): min=2275, max=52744, avg=13223.85, stdev=8212.41 00:18:30.890 clat percentiles (usec): 00:18:30.890 | 1.00th=[ 3589], 5.00th=[ 5276], 10.00th=[ 6194], 20.00th=[ 7898], 00:18:30.890 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11338], 00:18:30.890 | 70.00th=[13698], 80.00th=[19006], 90.00th=[22676], 95.00th=[30540], 00:18:30.890 | 99.00th=[44827], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:18:30.890 | 99.99th=[52691] 00:18:30.890 bw ( KiB/s): min=16384, max=16408, per=28.88%, avg=16396.00, stdev=16.97, samples=2 00:18:30.890 iops : min= 4096, max= 4102, avg=4099.00, stdev= 4.24, samples=2 00:18:30.890 lat (msec) : 4=1.14%, 10=29.48%, 20=48.63%, 50=18.41%, 100=2.34% 00:18:30.890 cpu : usr=1.79%, sys=6.26%, ctx=287, majf=0, minf=1 00:18:30.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:30.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.890 issued rwts: total=4096,4177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.890 00:18:30.890 Run status group 0 (all jobs): 00:18:30.890 READ: bw=53.1MiB/s (55.7MB/s), 11.4MiB/s-15.9MiB/s (11.9MB/s-16.7MB/s), io=53.5MiB (56.1MB), run=1002-1007msec 00:18:30.890 WRITE: bw=55.4MiB/s (58.1MB/s), 11.9MiB/s-16.2MiB/s (12.5MB/s-17.0MB/s), io=55.8MiB (58.5MB), run=1002-1007msec 00:18:30.890 00:18:30.890 Disk stats (read/write): 00:18:30.890 nvme0n1: ios=2400/2560, merge=0/0, ticks=25670/29000, in_queue=54670, util=84.37% 00:18:30.890 nvme0n2: ios=3122/3551, merge=0/0, ticks=18947/22415, in_queue=41362, util=90.46% 00:18:30.890 nvme0n3: ios=2133/2560, merge=0/0, ticks=19003/14669, in_queue=33672, util=92.21% 00:18:30.890 nvme0n4: ios=3632/3978, merge=0/0, ticks=35695/35754, in_queue=71449, util=94.46% 00:18:30.890 21:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:30.890 21:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1437339 00:18:30.890 21:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:30.890 21:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:30.890 [global] 00:18:30.890 thread=1 00:18:30.890 invalidate=1 00:18:30.890 rw=read 00:18:30.890 time_based=1 00:18:30.890 runtime=10 00:18:30.890 ioengine=libaio 00:18:30.890 direct=1 
00:18:30.890 bs=4096 00:18:30.890 iodepth=1 00:18:30.890 norandommap=1 00:18:30.890 numjobs=1 00:18:30.890 00:18:30.890 [job0] 00:18:30.890 filename=/dev/nvme0n1 00:18:30.890 [job1] 00:18:30.890 filename=/dev/nvme0n2 00:18:30.890 [job2] 00:18:30.890 filename=/dev/nvme0n3 00:18:30.890 [job3] 00:18:30.890 filename=/dev/nvme0n4 00:18:30.890 Could not set queue depth (nvme0n1) 00:18:30.890 Could not set queue depth (nvme0n2) 00:18:30.890 Could not set queue depth (nvme0n3) 00:18:30.890 Could not set queue depth (nvme0n4) 00:18:31.148 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.148 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.148 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.148 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.148 fio-3.35 00:18:31.148 Starting 4 threads 00:18:33.676 21:35:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:33.934 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4939776, buflen=4096 00:18:33.934 fio: pid=1437516, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:33.934 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:34.192 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=25870336, buflen=4096 00:18:34.192 fio: pid=1437512, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:34.192 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:34.192 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:34.450 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:34.450 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:34.450 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=311296, buflen=4096 00:18:34.450 fio: pid=1437495, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:34.709 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:34.709 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:34.709 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=352256, buflen=4096 00:18:34.709 fio: pid=1437500, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:34.709 00:18:34.709 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1437495: Fri Jun 7 21:35:34 2024 00:18:34.709 read: IOPS=24, BW=97.1KiB/s (99.4kB/s)(304KiB/3132msec) 00:18:34.709 slat (usec): min=10, max=25660, avg=442.10, stdev=3010.64 00:18:34.709 clat (usec): min=587, max=41891, avg=40472.11, stdev=4639.04 00:18:34.709 lat (usec): min=617, max=66975, avg=40919.71, stdev=5602.87 00:18:34.709 clat 
percentiles (usec): 00:18:34.709 | 1.00th=[ 586], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:34.709 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:34.709 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:34.709 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:34.709 | 99.99th=[41681] 00:18:34.709 bw ( KiB/s): min= 92, max= 104, per=1.06%, avg=96.67, stdev= 3.93, samples=6 00:18:34.709 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:18:34.709 lat (usec) : 750=1.30% 00:18:34.709 lat (msec) : 50=97.40% 00:18:34.709 cpu : usr=0.10%, sys=0.00%, ctx=79, majf=0, minf=1 00:18:34.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.709 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.709 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.709 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1437500: Fri Jun 7 21:35:34 2024 00:18:34.709 read: IOPS=25, BW=101KiB/s (103kB/s)(344KiB/3408msec) 00:18:34.709 slat (usec): min=8, max=29609, avg=683.53, stdev=3837.27 00:18:34.709 clat (usec): min=459, max=42024, avg=38679.67, stdev=9539.23 00:18:34.709 lat (usec): min=467, max=70925, avg=39370.87, stdev=10460.16 00:18:34.709 clat percentiles (usec): 00:18:34.709 | 1.00th=[ 461], 5.00th=[ 644], 10.00th=[40633], 20.00th=[41157], 00:18:34.709 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:34.709 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:34.709 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:34.709 | 99.99th=[42206] 00:18:34.709 bw ( KiB/s): min= 96, max= 113, per=1.12%, avg=101.50, stdev= 6.86, samples=6 00:18:34.709 iops : min= 24, max= 28, avg=25.33, stdev= 1.63, samples=6 00:18:34.709 lat (usec) : 500=3.45%, 750=2.30% 00:18:34.709 lat (msec) : 50=93.10% 00:18:34.709 cpu : usr=0.12%, sys=0.00%, ctx=90, majf=0, minf=1 00:18:34.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.709 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.709 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.709 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1437512: Fri Jun 7 21:35:34 2024 00:18:34.709 read: IOPS=2169, BW=8676KiB/s (8884kB/s)(24.7MiB/2912msec) 00:18:34.709 slat (usec): min=6, max=15329, avg=12.36, stdev=231.30 00:18:34.709 clat (usec): min=267, max=2331, avg=445.12, stdev=57.88 00:18:34.709 lat (usec): min=277, max=16018, avg=457.48, stdev=242.45 00:18:34.709 clat percentiles (usec): 00:18:34.709 | 1.00th=[ 314], 5.00th=[ 343], 10.00th=[ 363], 20.00th=[ 412], 00:18:34.709 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 461], 00:18:34.709 | 70.00th=[ 465], 80.00th=[ 474], 90.00th=[ 486], 95.00th=[ 510], 00:18:34.709 | 99.00th=[ 619], 99.50th=[ 652], 99.90th=[ 701], 99.95th=[ 725], 00:18:34.709 | 99.99th=[ 2343] 00:18:34.709 bw ( KiB/s): min= 8080, max=10288, per=98.00%, avg=8838.40, stdev=868.72, samples=5 00:18:34.709 iops : min= 2020, 
max= 2572, avg=2209.60, stdev=217.18, samples=5 00:18:34.709 lat (usec) : 500=93.29%, 750=6.66%, 1000=0.02% 00:18:34.709 lat (msec) : 4=0.02% 00:18:34.709 cpu : usr=1.06%, sys=3.78%, ctx=6319, majf=0, minf=1 00:18:34.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.709 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.709 issued rwts: total=6317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.709 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1437516: Fri Jun 7 21:35:34 2024 00:18:34.709 read: IOPS=451, BW=1804KiB/s (1847kB/s)(4824KiB/2674msec) 00:18:34.709 slat (nsec): min=6101, max=33802, avg=7737.36, stdev=3390.10 00:18:34.709 clat (usec): min=333, max=42071, avg=2187.22, stdev=8362.56 00:18:34.709 lat (usec): min=340, max=42093, avg=2194.95, stdev=8365.58 00:18:34.709 clat percentiles (usec): 00:18:34.709 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 367], 00:18:34.709 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 388], 00:18:34.709 | 70.00th=[ 400], 80.00th=[ 441], 90.00th=[ 461], 95.00th=[ 537], 00:18:34.709 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:34.709 | 99.99th=[42206] 00:18:34.709 bw ( KiB/s): min= 96, max= 6448, per=21.32%, avg=1923.20, stdev=2802.02, samples=5 00:18:34.709 iops : min= 24, max= 1612, avg=480.80, stdev=700.50, samples=5 00:18:34.709 lat (usec) : 500=93.29%, 750=2.15%, 1000=0.08% 00:18:34.709 lat (msec) : 50=4.39% 00:18:34.709 cpu : usr=0.07%, sys=0.49%, ctx=1207, majf=0, minf=2 00:18:34.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.709 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.709 issued rwts: total=1207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.709 00:18:34.709 Run status group 0 (all jobs): 00:18:34.709 READ: bw=9019KiB/s (9235kB/s), 97.1KiB/s-8676KiB/s (99.4kB/s-8884kB/s), io=30.0MiB (31.5MB), run=2674-3408msec 00:18:34.709 00:18:34.709 Disk stats (read/write): 00:18:34.709 nvme0n1: ios=74/0, merge=0/0, ticks=2995/0, in_queue=2995, util=93.68% 00:18:34.709 nvme0n2: ios=84/0, merge=0/0, ticks=3247/0, in_queue=3247, util=94.25% 00:18:34.709 nvme0n3: ios=6134/0, merge=0/0, ticks=2663/0, in_queue=2663, util=95.55% 00:18:34.709 nvme0n4: ios=1204/0, merge=0/0, ticks=2552/0, in_queue=2552, util=96.47% 00:18:34.968 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:34.968 21:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:35.225 21:35:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:35.225 21:35:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:35.484 21:35:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:35.484 21:35:35 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:35.742 21:35:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:35.742 21:35:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1437339 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:36.000 nvmf hotplug test: fio failed as expected 00:18:36.000 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.257 rmmod nvme_tcp 00:18:36.257 rmmod nvme_fabrics 00:18:36.257 rmmod nvme_keyring 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 
1434022 ']' 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1434022 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 1434022 ']' 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 1434022 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:36.257 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1434022 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1434022' 00:18:36.515 killing process with pid 1434022 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 1434022 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 1434022 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.515 21:35:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.049 21:35:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:39.049 00:18:39.049 real 0m29.605s 00:18:39.049 user 2m24.955s 00:18:39.049 sys 0m8.967s 00:18:39.049 21:35:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:39.049 21:35:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.049 ************************************ 00:18:39.049 END TEST nvmf_fio_target 00:18:39.049 ************************************ 00:18:39.049 21:35:38 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:39.050 21:35:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:39.050 21:35:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:39.050 21:35:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:39.050 ************************************ 00:18:39.050 START TEST nvmf_bdevio 00:18:39.050 ************************************ 00:18:39.050 21:35:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:39.050 * Looking for test storage... 
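[Note on the hotplug phase that just completed: fio.sh@58 launched a 10-second fio read job (iodepth 1, via scripts/fio-wrapper) against the four namespaces, fio.sh@63-66 then deleted the backing bdevs (concat0, raid0, Malloc0..Malloc6) while I/O was in flight, so each job aborted with err=121 (EREMOTEIO, the "Remote I/O error" lines above), and fio.sh@70/@80 recorded the resulting non-zero fio exit as "nvmf hotplug test: fio failed as expected". A minimal sketch of the same pattern, using plain fio rather than fio-wrapper and only a subset of the bdev deletions, so it is illustrative rather than the script's exact logic:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # long-running read workload against one of the connected namespaces
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
      --ioengine=libaio --direct=1 --time_based=1 --runtime=10 &
  fio_pid=$!
  sleep 3
  # delete the backing bdevs while the reads are still in flight
  $RPC bdev_raid_delete concat0
  $RPC bdev_raid_delete raid0
  $RPC bdev_malloc_delete Malloc0
  # fio is now expected to exit non-zero with err=121 (Remote I/O error)
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'
]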
00:18:39.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.050 21:35:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:45.693 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:45.693 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:45.693 Found net devices under 0000:af:00.0: cvl_0_0 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:45.693 
Found net devices under 0000:af:00.1: cvl_0_1 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.693 21:35:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.693 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.693 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.693 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:18:45.694 00:18:45.694 --- 10.0.0.2 ping statistics --- 00:18:45.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.694 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:18:45.694 00:18:45.694 --- 10.0.0.1 ping statistics --- 00:18:45.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.694 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1442590 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1442590 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 1442590 ']' 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 [2024-06-07 21:35:45.237123] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:18:45.694 [2024-06-07 21:35:45.237186] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.694 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.694 [2024-06-07 21:35:45.324580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.694 [2024-06-07 21:35:45.414918] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.694 [2024-06-07 21:35:45.414961] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
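The nvmf_tcp_init sequence above (nvmf/common.sh@229-268) is what lets one machine act as both target and initiator: the first E810 port is moved into a private network namespace and given the target address, the second stays in the root namespace as the initiator, and the two cross-namespace pings prove the data path before any NVMe traffic flows. Reduced to bare iproute2/iptables calls, with the interface names from this run, the plumbing is roughly:

    # sketch of nvmf_tcp_init; cvl_0_0/cvl_0_1 are the two E810 ports
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # drop stale addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns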
00:18:45.694 [2024-06-07 21:35:45.414971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.694 [2024-06-07 21:35:45.414981] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.694 [2024-06-07 21:35:45.414988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.694 [2024-06-07 21:35:45.415111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:45.694 [2024-06-07 21:35:45.415229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:45.694 [2024-06-07 21:35:45.415342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.694 [2024-06-07 21:35:45.415342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 [2024-06-07 21:35:45.570558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 Malloc0 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
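The entire target-side configuration for this test is the five rpc_cmd calls traced above (bdevio.sh@18-22). Against a stand-alone nvmf_tgt they would read as follows, assuming the default /var/tmp/spdk.sock RPC socket (here rpc_cmd reaches the socket inside the namespace via ip netns exec):

    # TCP transport with 8192-byte I/O units, a 64 MiB ram disk of 512 B blocks,
    # and a subsystem listening on the namespaced port
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The tcp.c listen notice that follows confirms the final call brought the listener up on 10.0.0.2:4420.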
00:18:45.694 [2024-06-07 21:35:45.617936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:45.694 { 00:18:45.694 "params": { 00:18:45.694 "name": "Nvme$subsystem", 00:18:45.694 "trtype": "$TEST_TRANSPORT", 00:18:45.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.694 "adrfam": "ipv4", 00:18:45.694 "trsvcid": "$NVMF_PORT", 00:18:45.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.694 "hdgst": ${hdgst:-false}, 00:18:45.694 "ddgst": ${ddgst:-false} 00:18:45.694 }, 00:18:45.694 "method": "bdev_nvme_attach_controller" 00:18:45.694 } 00:18:45.694 EOF 00:18:45.694 )") 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:45.694 21:35:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:45.694 "params": { 00:18:45.694 "name": "Nvme1", 00:18:45.694 "trtype": "tcp", 00:18:45.694 "traddr": "10.0.0.2", 00:18:45.694 "adrfam": "ipv4", 00:18:45.694 "trsvcid": "4420", 00:18:45.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.694 "hdgst": false, 00:18:45.694 "ddgst": false 00:18:45.694 }, 00:18:45.694 "method": "bdev_nvme_attach_controller" 00:18:45.694 }' 00:18:45.694 [2024-06-07 21:35:45.669839] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
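The --json /dev/fd/62 argument in the bdevio invocation above is bash process substitution at work: gen_nvmf_target_json emits the bdev_nvme_attach_controller configuration just printed, and the shell hands it to bdevio as a /dev/fd path with no temporary file. A stand-alone equivalent (the <(...) form expands to a path like the /dev/fd/62 seen in the trace):

    # feed the generated JSON straight into bdevio
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)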
00:18:45.694 [2024-06-07 21:35:45.669896] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442624 ]
00:18:45.694 EAL: No free 2048 kB hugepages reported on node 1
00:18:45.694 [2024-06-07 21:35:45.760349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:45.694 [2024-06-07 21:35:45.849537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:45.694 [2024-06-07 21:35:45.849558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:45.694 [2024-06-07 21:35:45.849562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:45.952 I/O targets:
00:18:45.952 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:18:45.952
00:18:45.952
00:18:45.952 CUnit - A unit testing framework for C - Version 2.1-3
00:18:45.952 http://cunit.sourceforge.net/
00:18:45.952
00:18:45.952
00:18:45.952 Suite: bdevio tests on: Nvme1n1
00:18:45.952 Test: blockdev write read block ...passed
00:18:45.952 Test: blockdev write zeroes read block ...passed
00:18:45.952 Test: blockdev write zeroes read no split ...passed
00:18:45.952 Test: blockdev write zeroes read split ...passed
00:18:46.211 Test: blockdev write zeroes read split partial ...passed
00:18:46.211 Test: blockdev reset ...[2024-06-07 21:35:46.263979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:46.211 [2024-06-07 21:35:46.264060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70fb60 (9): Bad file descriptor
00:18:46.211 [2024-06-07 21:35:46.360333] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:46.211 passed
00:18:46.211 Test: blockdev write read 8 blocks ...passed
00:18:46.211 Test: blockdev write read size > 128k ...passed
00:18:46.211 Test: blockdev write read invalid size ...passed
00:18:46.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:46.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:46.211 Test: blockdev write read max offset ...passed
00:18:46.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:46.469 Test: blockdev writev readv 8 blocks ...passed
00:18:46.469 Test: blockdev writev readv 30 x 1block ...passed
00:18:46.469 Test: blockdev writev readv block ...passed
00:18:46.469 Test: blockdev writev readv size > 128k ...passed
00:18:46.469 Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:46.469 Test: blockdev comparev and writev ...[2024-06-07 21:35:46.574462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:46.469 [2024-06-07 21:35:46.574490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.574503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:46.469 [2024-06-07 21:35:46.574509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.574871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:46.469 [2024-06-07 21:35:46.574880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.574890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:46.469 [2024-06-07 21:35:46.574896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.575263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:46.469 [2024-06-07 21:35:46.575273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.575284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:46.469 [2024-06-07 21:35:46.575290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.575656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:46.469 [2024-06-07 21:35:46.575665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.575676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:18:46.469 [2024-06-07 21:35:46.575686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:18:46.469 passed
00:18:46.469 Test: blockdev nvme passthru rw ...passed
00:18:46.469 Test: blockdev nvme passthru vendor specific ...[2024-06-07 21:35:46.657603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:46.469 [2024-06-07 21:35:46.657618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.657810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:46.469 [2024-06-07 21:35:46.657819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.657997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:46.469 [2024-06-07 21:35:46.658005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:18:46.469 [2024-06-07 21:35:46.658198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:18:46.469 [2024-06-07 21:35:46.658207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:18:46.469 passed
00:18:46.469 Test: blockdev nvme admin passthru ...passed
00:18:46.469 Test: blockdev copy ...passed
00:18:46.470
00:18:46.470 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:46.470               suites      1      1    n/a      0        0
00:18:46.470                tests     23     23     23      0        0
00:18:46.470              asserts    152    152    152      0      n/a
00:18:46.470
00:18:46.470 Elapsed time =    1.334 seconds
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:46.728 rmmod nvme_tcp
00:18:46.728 rmmod nvme_fabrics
00:18:46.728 rmmod nvme_keyring
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1442590 ']'
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1442590
00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z
1442590 ']' 00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 1442590 00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:46.728 21:35:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1442590 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1442590' 00:18:46.986 killing process with pid 1442590 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 1442590 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 1442590 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.986 21:35:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.524 21:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.524 00:18:49.524 real 0m10.396s 00:18:49.524 user 0m11.147s 00:18:49.524 sys 0m5.126s 00:18:49.524 21:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:49.524 21:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:49.524 ************************************ 00:18:49.524 END TEST nvmf_bdevio 00:18:49.524 ************************************ 00:18:49.524 21:35:49 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:49.524 21:35:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:49.524 21:35:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:49.524 21:35:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.524 ************************************ 00:18:49.524 START TEST nvmf_auth_target 00:18:49.524 ************************************ 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:49.524 * Looking for test storage... 
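run_test, invoked above from nvmf.sh@57, is the harness wrapper that produced the START/END TEST banners and the real/user/sys triple in this log. A sketch of its observable behaviour (the actual helper in autotest_common.sh also manages xtrace and failure accounting):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"               # emits the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }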
00:18:49.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.524 21:35:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.525 21:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.093 21:35:55 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:56.093 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:56.093 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:18:56.093 Found net devices under 0000:af:00.0: cvl_0_0 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:56.093 Found net devices under 0000:af:00.1: cvl_0_1 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:56.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:18:56.093 00:18:56.093 --- 10.0.0.2 ping statistics --- 00:18:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.093 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:18:56.093 00:18:56.093 --- 10.0.0.1 ping statistics --- 00:18:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.093 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1446885 00:18:56.093 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:56.094 21:35:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1446885 00:18:56.094 21:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1446885 ']' 00:18:56.094 21:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.094 21:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:56.094 21:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
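The bring-up here repeats the bdevio environment verbatim; the material difference is the app start. The auth test launches nvmf_tgt with -L nvmf_auth so the target logs its DH-HMAC-CHAP state machine, and because the target port lives inside the namespace the daemon itself must run under ip netns exec, with the harness blocking until the RPC socket answers. As traced (nvmf/common.sh@480-482):

    # sketch of nvmfappstart for this run
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock accepts RPCs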
00:18:56.094 21:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:56.094 21:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1446968 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=239f219442f3e4792c9d3c09765ec1d4454bd53aeeefc4d4 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4Yi 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 239f219442f3e4792c9d3c09765ec1d4454bd53aeeefc4d4 0 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 239f219442f3e4792c9d3c09765ec1d4454bd53aeeefc4d4 0 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=239f219442f3e4792c9d3c09765ec1d4454bd53aeeefc4d4 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4Yi 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4Yi 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.4Yi 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:56.353 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d702071209edd817cee34a69a707b7a013b977d579710bfa3be95ad5236880a3 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.f5V 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d702071209edd817cee34a69a707b7a013b977d579710bfa3be95ad5236880a3 3 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d702071209edd817cee34a69a707b7a013b977d579710bfa3be95ad5236880a3 3 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d702071209edd817cee34a69a707b7a013b977d579710bfa3be95ad5236880a3 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:56.354 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.f5V 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.f5V 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.f5V 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=12fc994870540718a0e259fe2745f0c0 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lQV 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 12fc994870540718a0e259fe2745f0c0 1 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 12fc994870540718a0e259fe2745f0c0 1 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=12fc994870540718a0e259fe2745f0c0 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:56.613 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lQV 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lQV 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.lQV 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6785b6992d5f709235e4978adde20e76a7299c5e94507c9e 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5rv 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6785b6992d5f709235e4978adde20e76a7299c5e94507c9e 2 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6785b6992d5f709235e4978adde20e76a7299c5e94507c9e 2 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6785b6992d5f709235e4978adde20e76a7299c5e94507c9e 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5rv 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5rv 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.5rv 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=51231e328f25babed00e29cd4d5411a7d1d634b936cd54a5 00:18:56.614 
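gen_dhchap_key, traced above, draws the requested number of random hex digits with xxd and hands them to an inline python snippet (nvmf/common.sh@705, whose body the log does not expand) that wraps them in the DHHC-1 secret format. A self-contained re-creation of the keys[0] case follows; the CRC32-plus-base64 framing is an assumption based on the standard nvme-cli DHHC-1 secret layout, not taken from the snippet itself:

    # 48 hex digits -> 24 random bytes, as in 'gen_dhchap_key null 48' above
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)
    # append CRC32 of the raw key, base64 the result, frame with digest id 00;
    # ids 01/02/03 would mean sha256/sha384/sha512 per the digests map above
    b64=$(python3 -c 'import base64,binascii,sys; k=bytes.fromhex(sys.argv[1]); print(base64.b64encode(k+binascii.crc32(k).to_bytes(4,"little")).decode())' "$key")
    echo "DHHC-1:00:${b64}:" > "$file"
    chmod 0600 "$file"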
21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tul 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 51231e328f25babed00e29cd4d5411a7d1d634b936cd54a5 2 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 51231e328f25babed00e29cd4d5411a7d1d634b936cd54a5 2 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=51231e328f25babed00e29cd4d5411a7d1d634b936cd54a5 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tul 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tul 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.tul 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a0360a35ad3b822188a2fc6d89ab3d21 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IUJ 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a0360a35ad3b822188a2fc6d89ab3d21 1 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a0360a35ad3b822188a2fc6d89ab3d21 1 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a0360a35ad3b822188a2fc6d89ab3d21 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:56.614 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IUJ 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IUJ 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.IUJ 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3802c8c10babc8cf761baf9b04b2cf82de8443f3339cf1202abb47373d0be7fc 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DgG 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3802c8c10babc8cf761baf9b04b2cf82de8443f3339cf1202abb47373d0be7fc 3 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3802c8c10babc8cf761baf9b04b2cf82de8443f3339cf1202abb47373d0be7fc 3 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3802c8c10babc8cf761baf9b04b2cf82de8443f3339cf1202abb47373d0be7fc 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DgG 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DgG 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.DgG 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1446885 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1446885 ']' 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
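From here each generated secret is registered twice: rpc_cmd loads it into the target's keyring over /var/tmp/spdk.sock, and hostrpc loads the same file into the host-side spdk_tgt (pid 1446968) that will act as the initiator. As the auth.sh@31 traces below show, hostrpc is simply rpc.py pointed at the host socket:

    # auth.sh@31, reduced to its effect
    hostrpc() {
        scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4Yi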
00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:56.877 21:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1446968 /var/tmp/host.sock 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1446968 ']' 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:56.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:56.877 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4Yi 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4Yi 00:18:57.137 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4Yi 00:18:57.396 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.f5V ]] 00:18:57.396 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.f5V 00:18:57.396 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.396 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.396 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.396 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.f5V 00:18:57.396 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.f5V 00:18:57.655 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:57.655 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.lQV 00:18:57.655 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.655 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.655 21:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.655 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.lQV 00:18:57.655 21:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.lQV 00:18:57.914 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.5rv ]] 00:18:57.914 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5rv 00:18:57.914 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.914 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.914 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.914 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5rv 00:18:57.914 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.5rv 00:18:58.173 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:58.173 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.tul 00:18:58.173 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.173 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.173 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.173 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.tul 00:18:58.173 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.tul 00:18:58.432 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.IUJ ]] 00:18:58.432 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IUJ 00:18:58.432 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.432 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.432 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.432 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IUJ 00:18:58.432 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.IUJ 00:18:58.691 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:58.691 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DgG 00:18:58.691 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.691 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.691 21:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.691 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.DgG 00:18:58.691 21:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.DgG 00:18:58.950 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:58.950 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:58.950 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.950 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.950 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.950 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.209 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.468 00:18:59.468 21:35:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.468 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.468 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.727 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.727 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.727 21:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.727 21:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.727 21:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.727 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.727 { 00:18:59.727 "cntlid": 1, 00:18:59.727 "qid": 0, 00:18:59.727 "state": "enabled", 00:18:59.727 "listen_address": { 00:18:59.727 "trtype": "TCP", 00:18:59.727 "adrfam": "IPv4", 00:18:59.727 "traddr": "10.0.0.2", 00:18:59.727 "trsvcid": "4420" 00:18:59.727 }, 00:18:59.727 "peer_address": { 00:18:59.727 "trtype": "TCP", 00:18:59.727 "adrfam": "IPv4", 00:18:59.727 "traddr": "10.0.0.1", 00:18:59.727 "trsvcid": "38978" 00:18:59.727 }, 00:18:59.727 "auth": { 00:18:59.727 "state": "completed", 00:18:59.727 "digest": "sha256", 00:18:59.727 "dhgroup": "null" 00:18:59.727 } 00:18:59.727 } 00:18:59.727 ]' 00:18:59.727 21:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.986 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.986 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.986 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.986 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.986 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.986 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.986 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.245 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:19:00.812 21:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.812 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:00.812 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.812 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:00.812 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.812 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.812 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.812 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.070 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.071 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.071 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.329 00:19:01.329 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.329 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.329 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.587 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.587 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.587 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.587 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.587 21:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.588 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.588 { 00:19:01.588 "cntlid": 3, 00:19:01.588 "qid": 0, 00:19:01.588 "state": "enabled", 00:19:01.588 "listen_address": { 00:19:01.588 
"trtype": "TCP", 00:19:01.588 "adrfam": "IPv4", 00:19:01.588 "traddr": "10.0.0.2", 00:19:01.588 "trsvcid": "4420" 00:19:01.588 }, 00:19:01.588 "peer_address": { 00:19:01.588 "trtype": "TCP", 00:19:01.588 "adrfam": "IPv4", 00:19:01.588 "traddr": "10.0.0.1", 00:19:01.588 "trsvcid": "55100" 00:19:01.588 }, 00:19:01.588 "auth": { 00:19:01.588 "state": "completed", 00:19:01.588 "digest": "sha256", 00:19:01.588 "dhgroup": "null" 00:19:01.588 } 00:19:01.588 } 00:19:01.588 ]' 00:19:01.588 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.846 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.846 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.846 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.846 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.846 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.846 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.846 21:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.104 21:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:19:03.039 21:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.039 21:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:03.039 21:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.039 21:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.039 21:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.039 21:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.039 21:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.039 21:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.039 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.297 00:19:03.297 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.297 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.297 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.555 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.555 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.555 21:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.555 21:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.555 21:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.555 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.555 { 00:19:03.555 "cntlid": 5, 00:19:03.555 "qid": 0, 00:19:03.555 "state": "enabled", 00:19:03.555 "listen_address": { 00:19:03.555 "trtype": "TCP", 00:19:03.555 "adrfam": "IPv4", 00:19:03.555 "traddr": "10.0.0.2", 00:19:03.555 "trsvcid": "4420" 00:19:03.555 }, 00:19:03.555 "peer_address": { 00:19:03.555 "trtype": "TCP", 00:19:03.555 "adrfam": "IPv4", 00:19:03.555 "traddr": "10.0.0.1", 00:19:03.555 "trsvcid": "55122" 00:19:03.555 }, 00:19:03.555 "auth": { 00:19:03.555 "state": "completed", 00:19:03.555 "digest": "sha256", 00:19:03.555 "dhgroup": "null" 00:19:03.555 } 00:19:03.555 } 00:19:03.555 ]' 00:19:03.555 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.813 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.813 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.813 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.813 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.813 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.813 21:36:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.813 21:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.070 21:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:19:04.636 21:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.636 21:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:04.636 21:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.636 21:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.636 21:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.636 21:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.636 21:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.636 21:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.894 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.460 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.460 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.460 { 00:19:05.460 "cntlid": 7, 00:19:05.460 "qid": 0, 00:19:05.460 "state": "enabled", 00:19:05.460 "listen_address": { 00:19:05.460 "trtype": "TCP", 00:19:05.460 "adrfam": "IPv4", 00:19:05.460 "traddr": "10.0.0.2", 00:19:05.460 "trsvcid": "4420" 00:19:05.460 }, 00:19:05.460 "peer_address": { 00:19:05.460 "trtype": "TCP", 00:19:05.460 "adrfam": "IPv4", 00:19:05.460 "traddr": "10.0.0.1", 00:19:05.460 "trsvcid": "55160" 00:19:05.460 }, 00:19:05.460 "auth": { 00:19:05.460 "state": "completed", 00:19:05.460 "digest": "sha256", 00:19:05.460 "dhgroup": "null" 00:19:05.460 } 00:19:05.460 } 00:19:05.460 ]' 00:19:05.461 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.719 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.719 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.719 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:05.719 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.719 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.719 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.719 21:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.978 21:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:19:06.914 21:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.914 21:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:06.914 21:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.914 
21:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.914 21:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.914 21:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.914 21:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.914 21:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.914 21:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.914 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.172 00:19:07.172 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.172 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.173 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.431 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.431 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.431 21:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.431 21:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.431 21:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.431 21:36:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.431 { 00:19:07.431 "cntlid": 9, 00:19:07.431 "qid": 0, 00:19:07.431 "state": "enabled", 00:19:07.431 "listen_address": { 00:19:07.431 "trtype": "TCP", 00:19:07.431 "adrfam": "IPv4", 00:19:07.431 "traddr": "10.0.0.2", 00:19:07.431 "trsvcid": "4420" 00:19:07.431 }, 00:19:07.431 "peer_address": { 00:19:07.431 "trtype": "TCP", 00:19:07.431 "adrfam": "IPv4", 00:19:07.431 "traddr": "10.0.0.1", 00:19:07.431 "trsvcid": "55176" 00:19:07.431 }, 00:19:07.431 "auth": { 00:19:07.431 "state": "completed", 00:19:07.431 "digest": "sha256", 00:19:07.431 "dhgroup": "ffdhe2048" 00:19:07.431 } 00:19:07.431 } 00:19:07.431 ]' 00:19:07.431 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.690 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.690 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.690 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.690 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.690 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.690 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.690 21:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.948 21:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:19:08.883 21:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.883 21:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:08.883 21:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.883 21:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.883 21:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.883 21:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.883 21:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.883 21:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.883 21:36:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.883 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.142 00:19:09.142 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.142 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.142 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.401 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.401 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.401 21:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.401 21:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.401 21:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.401 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.401 { 00:19:09.401 "cntlid": 11, 00:19:09.401 "qid": 0, 00:19:09.401 "state": "enabled", 00:19:09.401 "listen_address": { 00:19:09.401 "trtype": "TCP", 00:19:09.401 "adrfam": "IPv4", 00:19:09.401 "traddr": "10.0.0.2", 00:19:09.401 "trsvcid": "4420" 00:19:09.401 }, 00:19:09.401 "peer_address": { 00:19:09.401 "trtype": "TCP", 00:19:09.401 "adrfam": "IPv4", 00:19:09.401 "traddr": "10.0.0.1", 00:19:09.401 "trsvcid": "55202" 00:19:09.401 }, 00:19:09.401 "auth": { 00:19:09.401 "state": "completed", 00:19:09.401 "digest": "sha256", 00:19:09.401 "dhgroup": "ffdhe2048" 00:19:09.401 } 00:19:09.401 } 00:19:09.401 ]' 00:19:09.401 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.660 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.660 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.660 21:36:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.660 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.660 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.660 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.660 21:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.919 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:19:10.486 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.486 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:10.486 21:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.486 21:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.486 21:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.486 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.486 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.486 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.745 21:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.312 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.312 { 00:19:11.312 "cntlid": 13, 00:19:11.312 "qid": 0, 00:19:11.312 "state": "enabled", 00:19:11.312 "listen_address": { 00:19:11.312 "trtype": "TCP", 00:19:11.312 "adrfam": "IPv4", 00:19:11.312 "traddr": "10.0.0.2", 00:19:11.312 "trsvcid": "4420" 00:19:11.312 }, 00:19:11.312 "peer_address": { 00:19:11.312 "trtype": "TCP", 00:19:11.312 "adrfam": "IPv4", 00:19:11.312 "traddr": "10.0.0.1", 00:19:11.312 "trsvcid": "41706" 00:19:11.312 }, 00:19:11.312 "auth": { 00:19:11.312 "state": "completed", 00:19:11.312 "digest": "sha256", 00:19:11.312 "dhgroup": "ffdhe2048" 00:19:11.312 } 00:19:11.312 } 00:19:11.312 ]' 00:19:11.312 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.571 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.571 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.571 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.571 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.571 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.571 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.571 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.830 21:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:19:12.397 21:36:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.397 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:12.397 21:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.397 21:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.397 21:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.397 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.397 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.397 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.656 21:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.927 00:19:12.927 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.927 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.927 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.185 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.185 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
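[annotation] At this point the trace is inside the sha256 x ffdhe2048 leg of the digest x dhgroup x keyid sweep (the target/auth.sh@91..@96 markers): each leg pins the host's allowed --dhchap-digests/--dhchap-dhgroups, registers the host NQN on cnode0 with that slot's key (plus --dhchap-ctrlr-key when the slot has a controller key), attaches a controller, and then asserts the negotiated parameters from nvmf_subsystem_get_qpairs. Below is a compressed sketch of one leg, assuming rpc.py is on PATH (so the target side talks to the default /var/tmp/spdk.sock) and reusing the host socket, NQNs, and host UUID from the log; it is not a verbatim excerpt of target/auth.sh.

  verify_leg() {    # e.g. verify_leg sha256 ffdhe2048 0
      local digest=$1 dhgroup=$2 keyid=$3
      local host_rpc="rpc.py -s /var/tmp/host.sock"
      local hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562

      # Restrict what the host may negotiate, then register and attach with key$keyid.
      $host_rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key "key$keyid"    # bidirectional legs also pass --dhchap-ctrlr-key "ckey$keyid"
      $host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid"

      # Pass criterion: the qpair reports exactly the forced digest/dhgroup and
      # an auth state of "completed", as in the JSON dumps above.
      local qpairs
      qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
      [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] || return 1
      [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] || return 1
      [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]] || return 1

      $host_rpc bdev_nvme_detach_controller nvme0
  }

The second half of each leg repeats the same handshake through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:...) before tearing the host registration back down with nvmf_subsystem_remove_host, which is the connect/disconnect/remove pattern recurring throughout the log.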
00:19:13.185 21:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.185 21:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.185 21:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.185 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.185 { 00:19:13.185 "cntlid": 15, 00:19:13.185 "qid": 0, 00:19:13.185 "state": "enabled", 00:19:13.185 "listen_address": { 00:19:13.185 "trtype": "TCP", 00:19:13.185 "adrfam": "IPv4", 00:19:13.185 "traddr": "10.0.0.2", 00:19:13.185 "trsvcid": "4420" 00:19:13.185 }, 00:19:13.185 "peer_address": { 00:19:13.185 "trtype": "TCP", 00:19:13.185 "adrfam": "IPv4", 00:19:13.185 "traddr": "10.0.0.1", 00:19:13.185 "trsvcid": "41742" 00:19:13.185 }, 00:19:13.185 "auth": { 00:19:13.185 "state": "completed", 00:19:13.185 "digest": "sha256", 00:19:13.185 "dhgroup": "ffdhe2048" 00:19:13.185 } 00:19:13.185 } 00:19:13.185 ]' 00:19:13.185 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.444 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.444 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.444 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.444 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.444 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.444 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.444 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.702 21:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.695 21:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.955 00:19:14.955 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.955 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.955 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.213 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.213 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.213 21:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.213 21:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.213 21:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.213 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.213 { 00:19:15.214 "cntlid": 17, 00:19:15.214 "qid": 0, 00:19:15.214 "state": "enabled", 00:19:15.214 "listen_address": { 00:19:15.214 "trtype": "TCP", 00:19:15.214 "adrfam": "IPv4", 00:19:15.214 "traddr": "10.0.0.2", 00:19:15.214 "trsvcid": "4420" 00:19:15.214 }, 00:19:15.214 "peer_address": { 00:19:15.214 "trtype": "TCP", 00:19:15.214 "adrfam": "IPv4", 00:19:15.214 "traddr": "10.0.0.1", 00:19:15.214 "trsvcid": "41776" 00:19:15.214 }, 00:19:15.214 "auth": { 00:19:15.214 "state": "completed", 00:19:15.214 "digest": "sha256", 00:19:15.214 "dhgroup": "ffdhe3072" 00:19:15.214 } 00:19:15.214 } 00:19:15.214 ]' 00:19:15.214 21:36:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.214 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.214 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.473 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.473 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.473 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.473 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.473 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.732 21:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:19:16.299 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.557 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:16.557 21:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.557 21:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.557 21:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.557 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.557 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.558 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.816 
21:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.816 21:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.075 00:19:17.075 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.075 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.075 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.334 { 00:19:17.334 "cntlid": 19, 00:19:17.334 "qid": 0, 00:19:17.334 "state": "enabled", 00:19:17.334 "listen_address": { 00:19:17.334 "trtype": "TCP", 00:19:17.334 "adrfam": "IPv4", 00:19:17.334 "traddr": "10.0.0.2", 00:19:17.334 "trsvcid": "4420" 00:19:17.334 }, 00:19:17.334 "peer_address": { 00:19:17.334 "trtype": "TCP", 00:19:17.334 "adrfam": "IPv4", 00:19:17.334 "traddr": "10.0.0.1", 00:19:17.334 "trsvcid": "41786" 00:19:17.334 }, 00:19:17.334 "auth": { 00:19:17.334 "state": "completed", 00:19:17.334 "digest": "sha256", 00:19:17.334 "dhgroup": "ffdhe3072" 00:19:17.334 } 00:19:17.334 } 00:19:17.334 ]' 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.334 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.593 21:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:19:18.530 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.530 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:18.530 21:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.530 21:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.530 21:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.531 21:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.098 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.098 { 00:19:19.098 "cntlid": 21, 00:19:19.098 "qid": 0, 00:19:19.098 "state": "enabled", 00:19:19.098 "listen_address": { 00:19:19.098 "trtype": "TCP", 00:19:19.098 "adrfam": "IPv4", 00:19:19.098 "traddr": "10.0.0.2", 00:19:19.098 "trsvcid": "4420" 00:19:19.098 }, 00:19:19.098 "peer_address": { 00:19:19.098 "trtype": "TCP", 00:19:19.098 "adrfam": "IPv4", 00:19:19.098 "traddr": "10.0.0.1", 00:19:19.098 "trsvcid": "41808" 00:19:19.098 }, 00:19:19.098 "auth": { 00:19:19.098 "state": "completed", 00:19:19.098 "digest": "sha256", 00:19:19.098 "dhgroup": "ffdhe3072" 00:19:19.098 } 00:19:19.098 } 00:19:19.098 ]' 00:19:19.098 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.357 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.357 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.357 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.357 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.357 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.357 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.357 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.616 21:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.550 21:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.808 00:19:20.808 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.808 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.808 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.066 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.066 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.066 21:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.066 21:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.066 21:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.066 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.066 { 00:19:21.066 "cntlid": 23, 00:19:21.066 "qid": 0, 00:19:21.066 "state": "enabled", 00:19:21.066 "listen_address": { 00:19:21.066 "trtype": "TCP", 00:19:21.066 "adrfam": "IPv4", 00:19:21.066 "traddr": "10.0.0.2", 00:19:21.066 "trsvcid": "4420" 00:19:21.066 }, 00:19:21.066 "peer_address": { 00:19:21.066 "trtype": "TCP", 00:19:21.066 "adrfam": "IPv4", 
00:19:21.066 "traddr": "10.0.0.1", 00:19:21.066 "trsvcid": "41834" 00:19:21.066 }, 00:19:21.066 "auth": { 00:19:21.066 "state": "completed", 00:19:21.066 "digest": "sha256", 00:19:21.066 "dhgroup": "ffdhe3072" 00:19:21.066 } 00:19:21.066 } 00:19:21.066 ]' 00:19:21.066 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.324 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.324 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.324 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.324 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.324 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.324 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.324 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.582 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.518 21:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.084 00:19:23.084 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.084 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.084 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.343 { 00:19:23.343 "cntlid": 25, 00:19:23.343 "qid": 0, 00:19:23.343 "state": "enabled", 00:19:23.343 "listen_address": { 00:19:23.343 "trtype": "TCP", 00:19:23.343 "adrfam": "IPv4", 00:19:23.343 "traddr": "10.0.0.2", 00:19:23.343 "trsvcid": "4420" 00:19:23.343 }, 00:19:23.343 "peer_address": { 00:19:23.343 "trtype": "TCP", 00:19:23.343 "adrfam": "IPv4", 00:19:23.343 "traddr": "10.0.0.1", 00:19:23.343 "trsvcid": "43662" 00:19:23.343 }, 00:19:23.343 "auth": { 00:19:23.343 "state": "completed", 00:19:23.343 "digest": "sha256", 00:19:23.343 "dhgroup": "ffdhe4096" 00:19:23.343 } 00:19:23.343 } 00:19:23.343 ]' 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.343 21:36:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.601 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:19:24.535 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.535 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:24.535 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.535 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.535 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.535 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.535 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.535 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.793 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.051 00:19:25.051 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.051 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.051 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.309 { 00:19:25.309 "cntlid": 27, 00:19:25.309 "qid": 0, 00:19:25.309 "state": "enabled", 00:19:25.309 "listen_address": { 00:19:25.309 "trtype": "TCP", 00:19:25.309 "adrfam": "IPv4", 00:19:25.309 "traddr": "10.0.0.2", 00:19:25.309 "trsvcid": "4420" 00:19:25.309 }, 00:19:25.309 "peer_address": { 00:19:25.309 "trtype": "TCP", 00:19:25.309 "adrfam": "IPv4", 00:19:25.309 "traddr": "10.0.0.1", 00:19:25.309 "trsvcid": "43696" 00:19:25.309 }, 00:19:25.309 "auth": { 00:19:25.309 "state": "completed", 00:19:25.309 "digest": "sha256", 00:19:25.309 "dhgroup": "ffdhe4096" 00:19:25.309 } 00:19:25.309 } 00:19:25.309 ]' 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.309 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.567 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.567 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.567 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.567 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:19:26.501 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.501 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
00:19:26.501 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.501 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.501 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.501 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.501 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.501 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.760 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.019 00:19:27.019 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.019 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.019 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.277 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.277 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.277 21:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.277 21:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.277 21:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.277 
21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.277 { 00:19:27.277 "cntlid": 29, 00:19:27.277 "qid": 0, 00:19:27.277 "state": "enabled", 00:19:27.277 "listen_address": { 00:19:27.277 "trtype": "TCP", 00:19:27.277 "adrfam": "IPv4", 00:19:27.277 "traddr": "10.0.0.2", 00:19:27.277 "trsvcid": "4420" 00:19:27.277 }, 00:19:27.277 "peer_address": { 00:19:27.277 "trtype": "TCP", 00:19:27.277 "adrfam": "IPv4", 00:19:27.277 "traddr": "10.0.0.1", 00:19:27.277 "trsvcid": "43720" 00:19:27.277 }, 00:19:27.277 "auth": { 00:19:27.277 "state": "completed", 00:19:27.277 "digest": "sha256", 00:19:27.277 "dhgroup": "ffdhe4096" 00:19:27.277 } 00:19:27.277 } 00:19:27.277 ]' 00:19:27.277 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.536 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.536 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.536 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.536 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.536 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.536 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.536 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.795 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.731 21:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.297 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.297 { 00:19:29.297 "cntlid": 31, 00:19:29.297 "qid": 0, 00:19:29.297 "state": "enabled", 00:19:29.297 "listen_address": { 00:19:29.297 "trtype": "TCP", 00:19:29.297 "adrfam": "IPv4", 00:19:29.297 "traddr": "10.0.0.2", 00:19:29.297 "trsvcid": "4420" 00:19:29.297 }, 00:19:29.297 "peer_address": { 00:19:29.297 "trtype": "TCP", 00:19:29.297 "adrfam": "IPv4", 00:19:29.297 "traddr": "10.0.0.1", 00:19:29.297 "trsvcid": "43742" 00:19:29.297 }, 00:19:29.297 "auth": { 00:19:29.297 "state": "completed", 00:19:29.297 "digest": "sha256", 00:19:29.297 "dhgroup": "ffdhe4096" 00:19:29.297 } 00:19:29.297 } 00:19:29.297 ]' 00:19:29.297 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.554 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.554 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.554 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.554 21:36:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.554 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.554 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.554 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.812 21:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:30.745 21:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:30.745 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.312 00:19:31.312 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.312 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.312 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.570 { 00:19:31.570 "cntlid": 33, 00:19:31.570 "qid": 0, 00:19:31.570 "state": "enabled", 00:19:31.570 "listen_address": { 00:19:31.570 "trtype": "TCP", 00:19:31.570 "adrfam": "IPv4", 00:19:31.570 "traddr": "10.0.0.2", 00:19:31.570 "trsvcid": "4420" 00:19:31.570 }, 00:19:31.570 "peer_address": { 00:19:31.570 "trtype": "TCP", 00:19:31.570 "adrfam": "IPv4", 00:19:31.570 "traddr": "10.0.0.1", 00:19:31.570 "trsvcid": "46182" 00:19:31.570 }, 00:19:31.570 "auth": { 00:19:31.570 "state": "completed", 00:19:31.570 "digest": "sha256", 00:19:31.570 "dhgroup": "ffdhe6144" 00:19:31.570 } 00:19:31.570 } 00:19:31.570 ]' 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.570 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.829 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.829 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.829 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.829 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.829 21:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.087 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:19:32.654 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:32.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.914 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:32.914 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.914 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.914 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.914 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.914 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:32.914 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.173 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.430 00:19:33.430 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.430 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.430 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.688 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.688 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:19:33.688 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.689 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.689 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.689 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.689 { 00:19:33.689 "cntlid": 35, 00:19:33.689 "qid": 0, 00:19:33.689 "state": "enabled", 00:19:33.689 "listen_address": { 00:19:33.689 "trtype": "TCP", 00:19:33.689 "adrfam": "IPv4", 00:19:33.689 "traddr": "10.0.0.2", 00:19:33.689 "trsvcid": "4420" 00:19:33.689 }, 00:19:33.689 "peer_address": { 00:19:33.689 "trtype": "TCP", 00:19:33.689 "adrfam": "IPv4", 00:19:33.689 "traddr": "10.0.0.1", 00:19:33.689 "trsvcid": "46208" 00:19:33.689 }, 00:19:33.689 "auth": { 00:19:33.689 "state": "completed", 00:19:33.689 "digest": "sha256", 00:19:33.689 "dhgroup": "ffdhe6144" 00:19:33.689 } 00:19:33.689 } 00:19:33.689 ]' 00:19:33.947 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.947 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.947 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.947 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.947 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.947 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.947 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.947 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.205 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:19:34.772 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.772 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:34.772 21:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.772 21:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.772 21:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.772 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.772 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.772 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.030 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.597 00:19:35.597 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.597 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.597 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.855 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.855 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.855 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.855 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.855 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.855 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.855 { 00:19:35.855 "cntlid": 37, 00:19:35.855 "qid": 0, 00:19:35.855 "state": "enabled", 00:19:35.855 "listen_address": { 00:19:35.855 "trtype": "TCP", 00:19:35.855 "adrfam": "IPv4", 00:19:35.855 "traddr": "10.0.0.2", 00:19:35.855 "trsvcid": "4420" 00:19:35.855 }, 00:19:35.855 "peer_address": { 00:19:35.855 "trtype": "TCP", 00:19:35.855 "adrfam": "IPv4", 00:19:35.855 "traddr": "10.0.0.1", 00:19:35.855 "trsvcid": "46226" 00:19:35.855 }, 00:19:35.855 "auth": { 00:19:35.855 "state": "completed", 00:19:35.855 "digest": "sha256", 00:19:35.855 "dhgroup": "ffdhe6144" 00:19:35.855 } 00:19:35.855 } 00:19:35.855 ]' 00:19:35.855 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:35.855 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.855 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.855 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.855 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.855 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.855 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.855 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.114 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:19:37.048 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.048 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:37.048 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.048 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.048 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.048 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.048 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.048 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.306 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.564 00:19:37.564 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.564 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.564 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.822 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.822 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.822 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.822 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.822 21:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.822 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.822 { 00:19:37.822 "cntlid": 39, 00:19:37.822 "qid": 0, 00:19:37.822 "state": "enabled", 00:19:37.822 "listen_address": { 00:19:37.822 "trtype": "TCP", 00:19:37.822 "adrfam": "IPv4", 00:19:37.822 "traddr": "10.0.0.2", 00:19:37.822 "trsvcid": "4420" 00:19:37.822 }, 00:19:37.822 "peer_address": { 00:19:37.822 "trtype": "TCP", 00:19:37.822 "adrfam": "IPv4", 00:19:37.822 "traddr": "10.0.0.1", 00:19:37.822 "trsvcid": "46244" 00:19:37.822 }, 00:19:37.822 "auth": { 00:19:37.822 "state": "completed", 00:19:37.822 "digest": "sha256", 00:19:37.822 "dhgroup": "ffdhe6144" 00:19:37.822 } 00:19:37.822 } 00:19:37.822 ]' 00:19:37.822 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.822 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.822 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.080 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.080 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.080 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.080 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.080 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.338 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.905 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.164 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.100 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.100 21:36:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.100 { 00:19:40.100 "cntlid": 41, 00:19:40.100 "qid": 0, 00:19:40.100 "state": "enabled", 00:19:40.100 "listen_address": { 00:19:40.100 "trtype": "TCP", 00:19:40.100 "adrfam": "IPv4", 00:19:40.100 "traddr": "10.0.0.2", 00:19:40.100 "trsvcid": "4420" 00:19:40.100 }, 00:19:40.100 "peer_address": { 00:19:40.100 "trtype": "TCP", 00:19:40.100 "adrfam": "IPv4", 00:19:40.100 "traddr": "10.0.0.1", 00:19:40.100 "trsvcid": "46262" 00:19:40.100 }, 00:19:40.100 "auth": { 00:19:40.100 "state": "completed", 00:19:40.100 "digest": "sha256", 00:19:40.100 "dhgroup": "ffdhe8192" 00:19:40.100 } 00:19:40.100 } 00:19:40.100 ]' 00:19:40.100 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.358 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.358 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.358 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.358 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.358 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.358 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.358 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.616 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.550 21:36:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.550 21:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.808 21:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.808 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.808 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.373 00:19:42.373 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.373 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.373 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.631 { 00:19:42.631 "cntlid": 43, 00:19:42.631 "qid": 0, 00:19:42.631 "state": "enabled", 00:19:42.631 "listen_address": { 00:19:42.631 "trtype": "TCP", 00:19:42.631 "adrfam": "IPv4", 00:19:42.631 "traddr": "10.0.0.2", 00:19:42.631 "trsvcid": "4420" 00:19:42.631 }, 00:19:42.631 "peer_address": { 00:19:42.631 "trtype": "TCP", 00:19:42.631 
"adrfam": "IPv4", 00:19:42.631 "traddr": "10.0.0.1", 00:19:42.631 "trsvcid": "46738" 00:19:42.631 }, 00:19:42.631 "auth": { 00:19:42.631 "state": "completed", 00:19:42.631 "digest": "sha256", 00:19:42.631 "dhgroup": "ffdhe8192" 00:19:42.631 } 00:19:42.631 } 00:19:42.631 ]' 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.631 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.904 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:19:43.914 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.914 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:43.914 21:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.914 21:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.914 21:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.914 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.914 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.914 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.914 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.849 00:19:44.849 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.849 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.849 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.849 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.849 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.849 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.849 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.849 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.849 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.849 { 00:19:44.849 "cntlid": 45, 00:19:44.849 "qid": 0, 00:19:44.849 "state": "enabled", 00:19:44.849 "listen_address": { 00:19:44.849 "trtype": "TCP", 00:19:44.849 "adrfam": "IPv4", 00:19:44.849 "traddr": "10.0.0.2", 00:19:44.849 "trsvcid": "4420" 00:19:44.849 }, 00:19:44.849 "peer_address": { 00:19:44.849 "trtype": "TCP", 00:19:44.849 "adrfam": "IPv4", 00:19:44.849 "traddr": "10.0.0.1", 00:19:44.849 "trsvcid": "46764" 00:19:44.849 }, 00:19:44.849 "auth": { 00:19:44.849 "state": "completed", 00:19:44.849 "digest": "sha256", 00:19:44.849 "dhgroup": "ffdhe8192" 00:19:44.849 } 00:19:44.849 } 00:19:44.849 ]' 00:19:44.849 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.107 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.107 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.107 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.107 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.107 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.107 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.107 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.365 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:19:46.298 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.298 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:46.298 21:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.298 21:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.298 21:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.298 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.298 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.298 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.556 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.122 00:19:47.122 21:36:47 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.122 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.122 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.380 { 00:19:47.380 "cntlid": 47, 00:19:47.380 "qid": 0, 00:19:47.380 "state": "enabled", 00:19:47.380 "listen_address": { 00:19:47.380 "trtype": "TCP", 00:19:47.380 "adrfam": "IPv4", 00:19:47.380 "traddr": "10.0.0.2", 00:19:47.380 "trsvcid": "4420" 00:19:47.380 }, 00:19:47.380 "peer_address": { 00:19:47.380 "trtype": "TCP", 00:19:47.380 "adrfam": "IPv4", 00:19:47.380 "traddr": "10.0.0.1", 00:19:47.380 "trsvcid": "46780" 00:19:47.380 }, 00:19:47.380 "auth": { 00:19:47.380 "state": "completed", 00:19:47.380 "digest": "sha256", 00:19:47.380 "dhgroup": "ffdhe8192" 00:19:47.380 } 00:19:47.380 } 00:19:47.380 ]' 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.380 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.638 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.638 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.638 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.896 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:19:48.462 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.720 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:48.720 21:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.720 21:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.720 21:36:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.720 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.721 21:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.979 21:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.979 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.979 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.237 00:19:49.237 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.237 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.237 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.494 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.494 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.494 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.494 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.494 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.494 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:49.494 { 00:19:49.494 "cntlid": 49, 00:19:49.494 "qid": 0, 00:19:49.494 "state": "enabled", 00:19:49.495 "listen_address": { 00:19:49.495 "trtype": "TCP", 00:19:49.495 "adrfam": "IPv4", 00:19:49.495 "traddr": "10.0.0.2", 00:19:49.495 "trsvcid": "4420" 00:19:49.495 }, 00:19:49.495 "peer_address": { 00:19:49.495 "trtype": "TCP", 00:19:49.495 "adrfam": "IPv4", 00:19:49.495 "traddr": "10.0.0.1", 00:19:49.495 "trsvcid": "46800" 00:19:49.495 }, 00:19:49.495 "auth": { 00:19:49.495 "state": "completed", 00:19:49.495 "digest": "sha384", 00:19:49.495 "dhgroup": "null" 00:19:49.495 } 00:19:49.495 } 00:19:49.495 ]' 00:19:49.495 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.495 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.495 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.495 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:49.495 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.495 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.495 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.495 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.752 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:19:50.686 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.686 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:50.686 21:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.686 21:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.686 21:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.686 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.686 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.686 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.944 21:36:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.944 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.202 00:19:51.202 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.202 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.202 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.460 { 00:19:51.460 "cntlid": 51, 00:19:51.460 "qid": 0, 00:19:51.460 "state": "enabled", 00:19:51.460 "listen_address": { 00:19:51.460 "trtype": "TCP", 00:19:51.460 "adrfam": "IPv4", 00:19:51.460 "traddr": "10.0.0.2", 00:19:51.460 "trsvcid": "4420" 00:19:51.460 }, 00:19:51.460 "peer_address": { 00:19:51.460 "trtype": "TCP", 00:19:51.460 "adrfam": "IPv4", 00:19:51.460 "traddr": "10.0.0.1", 00:19:51.460 "trsvcid": "41694" 00:19:51.460 }, 00:19:51.460 "auth": { 00:19:51.460 "state": "completed", 00:19:51.460 "digest": "sha384", 00:19:51.460 "dhgroup": "null" 00:19:51.460 } 00:19:51.460 } 00:19:51.460 ]' 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:51.460 21:36:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.460 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.720 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
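
Each iteration in this section is one connect_authenticate round: pin the host app to a single digest/dhgroup pair, register the host NQN on the subsystem with a DH-HMAC-CHAP key pair, attach a controller (authentication runs during CONNECT), verify, and tear down. Below is a hedged sketch of that round using only the RPCs and flags visible in the trace; the key names key2/ckey2 refer to keyring entries created earlier in the script, outside this excerpt.

    # One connect_authenticate round, reduced to its RPCs (a sketch; key2/ckey2
    # must already be registered with both the target and host apps).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1. Restrict the host app to the digest/dhgroup pair under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups null

    # 2. Allow the host NQN on the subsystem; --dhchap-ctrlr-key makes the
    #    authentication bidirectional (it is omitted for the key3 iterations).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller; DH-HMAC-CHAP is negotiated during CONNECT.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 4. After the qpair reports auth.state == "completed", tear down so the
    #    next digest/dhgroup/key combination starts clean.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The interleaved `nvme connect ... --dhchap-secret DHHC-1:...` / `nvme disconnect` steps in the trace exercise the same key material through the kernel initiator rather than the SPDK host app; the DHHC-1 strings are the serialized secrets themselves, passed directly on the command line.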
00:19:52.655 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.914 00:19:52.914 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.914 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.914 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.172 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.172 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.172 21:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.172 21:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 21:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.172 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.172 { 00:19:53.172 "cntlid": 53, 00:19:53.172 "qid": 0, 00:19:53.172 "state": "enabled", 00:19:53.172 "listen_address": { 00:19:53.172 "trtype": "TCP", 00:19:53.172 "adrfam": "IPv4", 00:19:53.172 "traddr": "10.0.0.2", 00:19:53.172 "trsvcid": "4420" 00:19:53.172 }, 00:19:53.172 "peer_address": { 00:19:53.172 "trtype": "TCP", 00:19:53.172 "adrfam": "IPv4", 00:19:53.172 "traddr": "10.0.0.1", 00:19:53.172 "trsvcid": "41716" 00:19:53.172 }, 00:19:53.172 "auth": { 00:19:53.172 "state": "completed", 00:19:53.172 "digest": "sha384", 00:19:53.172 "dhgroup": "null" 00:19:53.172 } 00:19:53.172 } 00:19:53.172 ]' 00:19:53.172 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.172 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.430 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.430 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:53.430 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.430 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.430 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.430 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.688 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.622 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.189 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.189 { 00:19:55.189 "cntlid": 55, 00:19:55.189 "qid": 0, 00:19:55.189 "state": "enabled", 00:19:55.189 "listen_address": { 00:19:55.189 "trtype": "TCP", 00:19:55.189 "adrfam": "IPv4", 00:19:55.189 "traddr": "10.0.0.2", 00:19:55.189 "trsvcid": "4420" 00:19:55.189 }, 00:19:55.189 "peer_address": { 00:19:55.189 "trtype": "TCP", 00:19:55.189 "adrfam": "IPv4", 00:19:55.189 "traddr": "10.0.0.1", 00:19:55.189 "trsvcid": "41736" 00:19:55.189 }, 00:19:55.189 "auth": { 00:19:55.189 "state": "completed", 00:19:55.189 "digest": "sha384", 00:19:55.189 "dhgroup": "null" 00:19:55.189 } 00:19:55.189 } 00:19:55.189 ]' 00:19:55.189 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.447 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.447 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.447 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:55.447 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.447 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.447 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.447 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.705 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:56.638 21:36:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.638 21:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.895 21:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.895 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.895 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.153 00:19:57.153 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.153 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.153 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.410 { 00:19:57.410 "cntlid": 57, 00:19:57.410 "qid": 0, 00:19:57.410 "state": "enabled", 00:19:57.410 "listen_address": { 00:19:57.410 "trtype": "TCP", 00:19:57.410 "adrfam": "IPv4", 00:19:57.410 "traddr": "10.0.0.2", 00:19:57.410 "trsvcid": "4420" 00:19:57.410 }, 00:19:57.410 "peer_address": { 00:19:57.410 "trtype": "TCP", 00:19:57.410 "adrfam": "IPv4", 00:19:57.410 "traddr": "10.0.0.1", 00:19:57.410 "trsvcid": "41772" 00:19:57.410 }, 00:19:57.410 "auth": { 00:19:57.410 "state": "completed", 00:19:57.410 "digest": "sha384", 00:19:57.410 "dhgroup": "ffdhe2048" 00:19:57.410 } 00:19:57.410 } 00:19:57.410 ]' 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.410 21:36:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.410 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.667 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.599 21:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.857 21:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.857 21:36:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.857 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.114 00:19:59.114 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.115 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.115 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.373 { 00:19:59.373 "cntlid": 59, 00:19:59.373 "qid": 0, 00:19:59.373 "state": "enabled", 00:19:59.373 "listen_address": { 00:19:59.373 "trtype": "TCP", 00:19:59.373 "adrfam": "IPv4", 00:19:59.373 "traddr": "10.0.0.2", 00:19:59.373 "trsvcid": "4420" 00:19:59.373 }, 00:19:59.373 "peer_address": { 00:19:59.373 "trtype": "TCP", 00:19:59.373 "adrfam": "IPv4", 00:19:59.373 "traddr": "10.0.0.1", 00:19:59.373 "trsvcid": "41794" 00:19:59.373 }, 00:19:59.373 "auth": { 00:19:59.373 "state": "completed", 00:19:59.373 "digest": "sha384", 00:19:59.373 "dhgroup": "ffdhe2048" 00:19:59.373 } 00:19:59.373 } 00:19:59.373 ]' 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.373 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.631 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:20:00.562 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.562 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:00.562 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.562 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.562 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.562 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.562 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.562 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.820 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.078 00:20:01.078 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.078 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.078 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.335 { 00:20:01.335 "cntlid": 61, 00:20:01.335 "qid": 0, 00:20:01.335 "state": "enabled", 00:20:01.335 "listen_address": { 00:20:01.335 "trtype": "TCP", 00:20:01.335 "adrfam": "IPv4", 00:20:01.335 "traddr": "10.0.0.2", 00:20:01.335 "trsvcid": "4420" 00:20:01.335 }, 00:20:01.335 "peer_address": { 00:20:01.335 "trtype": "TCP", 00:20:01.335 "adrfam": "IPv4", 00:20:01.335 "traddr": "10.0.0.1", 00:20:01.335 "trsvcid": "42560" 00:20:01.335 }, 00:20:01.335 "auth": { 00:20:01.335 "state": "completed", 00:20:01.335 "digest": "sha384", 00:20:01.335 "dhgroup": "ffdhe2048" 00:20:01.335 } 00:20:01.335 } 00:20:01.335 ]' 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.335 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.593 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.593 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.593 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.851 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:20:02.417 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.417 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:02.417 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.417 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.417 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.417 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.417 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:02.417 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.676 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.242 00:20:03.242 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.242 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.242 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.242 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.242 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.242 21:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.242 21:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.501 { 00:20:03.501 "cntlid": 63, 00:20:03.501 "qid": 0, 00:20:03.501 "state": "enabled", 00:20:03.501 "listen_address": { 00:20:03.501 "trtype": "TCP", 00:20:03.501 "adrfam": "IPv4", 00:20:03.501 "traddr": "10.0.0.2", 00:20:03.501 "trsvcid": "4420" 00:20:03.501 }, 00:20:03.501 "peer_address": { 00:20:03.501 "trtype": "TCP", 00:20:03.501 "adrfam": "IPv4", 00:20:03.501 "traddr": "10.0.0.1", 00:20:03.501 "trsvcid": "42578" 00:20:03.501 }, 00:20:03.501 "auth": { 00:20:03.501 "state": "completed", 00:20:03.501 "digest": 
"sha384", 00:20:03.501 "dhgroup": "ffdhe2048" 00:20:03.501 } 00:20:03.501 } 00:20:03.501 ]' 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.501 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.760 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.693 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.950 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.951 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.951 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.208 00:20:05.208 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.208 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.208 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.466 { 00:20:05.466 "cntlid": 65, 00:20:05.466 "qid": 0, 00:20:05.466 "state": "enabled", 00:20:05.466 "listen_address": { 00:20:05.466 "trtype": "TCP", 00:20:05.466 "adrfam": "IPv4", 00:20:05.466 "traddr": "10.0.0.2", 00:20:05.466 "trsvcid": "4420" 00:20:05.466 }, 00:20:05.466 "peer_address": { 00:20:05.466 "trtype": "TCP", 00:20:05.466 "adrfam": "IPv4", 00:20:05.466 "traddr": "10.0.0.1", 00:20:05.466 "trsvcid": "42616" 00:20:05.466 }, 00:20:05.466 "auth": { 00:20:05.466 "state": "completed", 00:20:05.466 "digest": "sha384", 00:20:05.466 "dhgroup": "ffdhe3072" 00:20:05.466 } 00:20:05.466 } 00:20:05.466 ]' 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.466 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.724 
21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.656 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.222 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.222 { 00:20:07.222 "cntlid": 67, 00:20:07.222 "qid": 0, 00:20:07.222 "state": "enabled", 00:20:07.222 "listen_address": { 00:20:07.222 "trtype": "TCP", 00:20:07.222 "adrfam": "IPv4", 00:20:07.222 "traddr": "10.0.0.2", 00:20:07.222 "trsvcid": "4420" 00:20:07.222 }, 00:20:07.222 "peer_address": { 00:20:07.222 "trtype": "TCP", 00:20:07.222 "adrfam": "IPv4", 00:20:07.222 "traddr": "10.0.0.1", 00:20:07.222 "trsvcid": "42646" 00:20:07.222 }, 00:20:07.222 "auth": { 00:20:07.222 "state": "completed", 00:20:07.222 "digest": "sha384", 00:20:07.222 "dhgroup": "ffdhe3072" 00:20:07.222 } 00:20:07.222 } 00:20:07.222 ]' 00:20:07.222 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.480 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.480 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.480 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.480 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.480 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.480 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.480 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.738 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.670 
21:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.670 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.927 00:20:08.927 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.927 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.927 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.184 { 00:20:09.184 "cntlid": 69, 00:20:09.184 "qid": 0, 00:20:09.184 "state": "enabled", 00:20:09.184 "listen_address": { 
00:20:09.184 "trtype": "TCP", 00:20:09.184 "adrfam": "IPv4", 00:20:09.184 "traddr": "10.0.0.2", 00:20:09.184 "trsvcid": "4420" 00:20:09.184 }, 00:20:09.184 "peer_address": { 00:20:09.184 "trtype": "TCP", 00:20:09.184 "adrfam": "IPv4", 00:20:09.184 "traddr": "10.0.0.1", 00:20:09.184 "trsvcid": "42658" 00:20:09.184 }, 00:20:09.184 "auth": { 00:20:09.184 "state": "completed", 00:20:09.184 "digest": "sha384", 00:20:09.184 "dhgroup": "ffdhe3072" 00:20:09.184 } 00:20:09.184 } 00:20:09.184 ]' 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.184 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.442 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:10.467 
21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.467 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.756 00:20:10.756 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.756 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.756 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.014 { 00:20:11.014 "cntlid": 71, 00:20:11.014 "qid": 0, 00:20:11.014 "state": "enabled", 00:20:11.014 "listen_address": { 00:20:11.014 "trtype": "TCP", 00:20:11.014 "adrfam": "IPv4", 00:20:11.014 "traddr": "10.0.0.2", 00:20:11.014 "trsvcid": "4420" 00:20:11.014 }, 00:20:11.014 "peer_address": { 00:20:11.014 "trtype": "TCP", 00:20:11.014 "adrfam": "IPv4", 00:20:11.014 "traddr": "10.0.0.1", 00:20:11.014 "trsvcid": "42678" 00:20:11.014 }, 00:20:11.014 "auth": { 00:20:11.014 "state": "completed", 00:20:11.014 "digest": "sha384", 00:20:11.014 "dhgroup": "ffdhe3072" 00:20:11.014 } 00:20:11.014 } 00:20:11.014 ]' 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.014 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.015 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.015 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.272 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.273 21:37:11 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.273 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.531 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.095 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.352 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.919 00:20:12.919 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.919 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.919 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.919 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.919 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.919 21:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.919 21:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.919 21:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.919 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.919 { 00:20:12.919 "cntlid": 73, 00:20:12.919 "qid": 0, 00:20:12.919 "state": "enabled", 00:20:12.919 "listen_address": { 00:20:12.919 "trtype": "TCP", 00:20:12.919 "adrfam": "IPv4", 00:20:12.919 "traddr": "10.0.0.2", 00:20:12.919 "trsvcid": "4420" 00:20:12.919 }, 00:20:12.919 "peer_address": { 00:20:12.919 "trtype": "TCP", 00:20:12.919 "adrfam": "IPv4", 00:20:12.919 "traddr": "10.0.0.1", 00:20:12.919 "trsvcid": "43818" 00:20:12.919 }, 00:20:12.919 "auth": { 00:20:12.919 "state": "completed", 00:20:12.919 "digest": "sha384", 00:20:12.919 "dhgroup": "ffdhe4096" 00:20:12.919 } 00:20:12.919 } 00:20:12.919 ]' 00:20:12.919 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.176 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.176 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.176 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.176 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.176 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.176 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.176 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.433 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.365 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.622 00:20:14.622 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.622 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.622 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.880 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.880 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.880 21:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.880 21:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:14.880 21:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.880 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.880 { 00:20:14.880 "cntlid": 75, 00:20:14.880 "qid": 0, 00:20:14.880 "state": "enabled", 00:20:14.880 "listen_address": { 00:20:14.880 "trtype": "TCP", 00:20:14.880 "adrfam": "IPv4", 00:20:14.880 "traddr": "10.0.0.2", 00:20:14.880 "trsvcid": "4420" 00:20:14.880 }, 00:20:14.880 "peer_address": { 00:20:14.880 "trtype": "TCP", 00:20:14.880 "adrfam": "IPv4", 00:20:14.880 "traddr": "10.0.0.1", 00:20:14.880 "trsvcid": "43838" 00:20:14.880 }, 00:20:14.880 "auth": { 00:20:14.880 "state": "completed", 00:20:14.880 "digest": "sha384", 00:20:14.880 "dhgroup": "ffdhe4096" 00:20:14.880 } 00:20:14.880 } 00:20:14.880 ]' 00:20:14.880 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.137 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.137 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.137 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.137 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.137 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.137 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.137 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.394 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.327 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.585 00:20:16.585 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.585 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.585 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.842 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.842 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.842 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.842 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.842 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.842 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.842 { 00:20:16.842 "cntlid": 77, 00:20:16.842 "qid": 0, 00:20:16.842 "state": "enabled", 00:20:16.842 "listen_address": { 00:20:16.843 "trtype": "TCP", 00:20:16.843 "adrfam": "IPv4", 00:20:16.843 "traddr": "10.0.0.2", 00:20:16.843 "trsvcid": "4420" 00:20:16.843 }, 00:20:16.843 "peer_address": { 00:20:16.843 "trtype": "TCP", 00:20:16.843 "adrfam": "IPv4", 00:20:16.843 "traddr": "10.0.0.1", 00:20:16.843 "trsvcid": "43866" 00:20:16.843 }, 00:20:16.843 "auth": { 00:20:16.843 "state": "completed", 00:20:16.843 "digest": "sha384", 00:20:16.843 "dhgroup": "ffdhe4096" 00:20:16.843 } 00:20:16.843 } 00:20:16.843 ]' 00:20:16.843 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.843 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.100 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:17.100 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.100 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.100 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.100 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.100 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.357 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:18.289 21:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.290 21:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.547 21:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.547 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.547 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.805 00:20:18.805 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.805 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.805 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.062 { 00:20:19.062 "cntlid": 79, 00:20:19.062 "qid": 0, 00:20:19.062 "state": "enabled", 00:20:19.062 "listen_address": { 00:20:19.062 "trtype": "TCP", 00:20:19.062 "adrfam": "IPv4", 00:20:19.062 "traddr": "10.0.0.2", 00:20:19.062 "trsvcid": "4420" 00:20:19.062 }, 00:20:19.062 "peer_address": { 00:20:19.062 "trtype": "TCP", 00:20:19.062 "adrfam": "IPv4", 00:20:19.062 "traddr": "10.0.0.1", 00:20:19.062 "trsvcid": "43890" 00:20:19.062 }, 00:20:19.062 "auth": { 00:20:19.062 "state": "completed", 00:20:19.062 "digest": "sha384", 00:20:19.062 "dhgroup": "ffdhe4096" 00:20:19.062 } 00:20:19.062 } 00:20:19.062 ]' 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.062 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.320 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.252 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:20.252 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.510 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.767 00:20:20.767 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.767 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.767 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.024 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.024 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.024 21:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.024 21:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.024 21:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.024 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.024 { 00:20:21.024 "cntlid": 81, 00:20:21.024 "qid": 0, 00:20:21.024 "state": "enabled", 00:20:21.024 "listen_address": { 00:20:21.024 "trtype": "TCP", 00:20:21.024 "adrfam": "IPv4", 00:20:21.024 "traddr": "10.0.0.2", 00:20:21.024 "trsvcid": "4420" 00:20:21.024 }, 00:20:21.024 "peer_address": { 00:20:21.024 "trtype": "TCP", 00:20:21.024 "adrfam": "IPv4", 00:20:21.024 "traddr": "10.0.0.1", 00:20:21.024 "trsvcid": "43926" 00:20:21.024 }, 00:20:21.024 "auth": { 00:20:21.024 "state": "completed", 00:20:21.024 "digest": "sha384", 00:20:21.024 "dhgroup": "ffdhe6144" 00:20:21.024 } 00:20:21.024 } 00:20:21.024 ]' 00:20:21.024 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.282 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.282 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.282 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.282 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.282 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.282 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.282 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.539 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:20:22.102 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.102 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:22.102 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.102 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.102 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.102 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.102 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:22.102 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.360 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.924 00:20:22.924 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.924 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.924 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.181 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.181 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.181 21:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.181 21:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.181 21:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.181 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.181 { 00:20:23.181 "cntlid": 83, 00:20:23.182 "qid": 0, 00:20:23.182 "state": "enabled", 00:20:23.182 "listen_address": { 00:20:23.182 "trtype": "TCP", 00:20:23.182 "adrfam": "IPv4", 00:20:23.182 "traddr": "10.0.0.2", 00:20:23.182 "trsvcid": "4420" 00:20:23.182 }, 00:20:23.182 "peer_address": { 00:20:23.182 "trtype": "TCP", 00:20:23.182 "adrfam": "IPv4", 00:20:23.182 "traddr": "10.0.0.1", 00:20:23.182 "trsvcid": "50700" 00:20:23.182 }, 00:20:23.182 "auth": { 00:20:23.182 "state": "completed", 00:20:23.182 "digest": "sha384", 00:20:23.182 
"dhgroup": "ffdhe6144" 00:20:23.182 } 00:20:23.182 } 00:20:23.182 ]' 00:20:23.182 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.182 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.182 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.182 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.182 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.182 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.182 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.182 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.440 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.375 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.944 00:20:24.944 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.944 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.944 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.203 { 00:20:25.203 "cntlid": 85, 00:20:25.203 "qid": 0, 00:20:25.203 "state": "enabled", 00:20:25.203 "listen_address": { 00:20:25.203 "trtype": "TCP", 00:20:25.203 "adrfam": "IPv4", 00:20:25.203 "traddr": "10.0.0.2", 00:20:25.203 "trsvcid": "4420" 00:20:25.203 }, 00:20:25.203 "peer_address": { 00:20:25.203 "trtype": "TCP", 00:20:25.203 "adrfam": "IPv4", 00:20:25.203 "traddr": "10.0.0.1", 00:20:25.203 "trsvcid": "50742" 00:20:25.203 }, 00:20:25.203 "auth": { 00:20:25.203 "state": "completed", 00:20:25.203 "digest": "sha384", 00:20:25.203 "dhgroup": "ffdhe6144" 00:20:25.203 } 00:20:25.203 } 00:20:25.203 ]' 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.203 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.462 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.462 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.462 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.462 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.396 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.397 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:26.397 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.397 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.655 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.655 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.655 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.914 00:20:26.914 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.914 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.914 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.172 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.172 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.172 21:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.172 21:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.172 21:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.172 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.172 { 00:20:27.172 "cntlid": 87, 00:20:27.172 "qid": 0, 00:20:27.172 "state": "enabled", 00:20:27.172 "listen_address": { 00:20:27.172 "trtype": "TCP", 00:20:27.172 "adrfam": "IPv4", 00:20:27.172 "traddr": "10.0.0.2", 00:20:27.172 "trsvcid": "4420" 00:20:27.172 }, 00:20:27.172 "peer_address": { 00:20:27.172 "trtype": "TCP", 00:20:27.172 "adrfam": "IPv4", 00:20:27.172 "traddr": "10.0.0.1", 00:20:27.172 "trsvcid": "50756" 00:20:27.172 }, 00:20:27.172 "auth": { 00:20:27.172 "state": "completed", 00:20:27.172 "digest": "sha384", 00:20:27.172 "dhgroup": "ffdhe6144" 00:20:27.172 } 00:20:27.172 } 00:20:27.172 ]' 00:20:27.173 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.173 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.173 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.173 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.173 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.173 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.173 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.173 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.431 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 
00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.366 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.625 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.194 00:20:29.194 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.194 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.194 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.452 { 00:20:29.452 "cntlid": 89, 00:20:29.452 "qid": 0, 00:20:29.452 "state": "enabled", 00:20:29.452 "listen_address": { 00:20:29.452 "trtype": "TCP", 00:20:29.452 "adrfam": "IPv4", 00:20:29.452 "traddr": "10.0.0.2", 
00:20:29.452 "trsvcid": "4420" 00:20:29.452 }, 00:20:29.452 "peer_address": { 00:20:29.452 "trtype": "TCP", 00:20:29.452 "adrfam": "IPv4", 00:20:29.452 "traddr": "10.0.0.1", 00:20:29.452 "trsvcid": "50786" 00:20:29.452 }, 00:20:29.452 "auth": { 00:20:29.452 "state": "completed", 00:20:29.452 "digest": "sha384", 00:20:29.452 "dhgroup": "ffdhe8192" 00:20:29.452 } 00:20:29.452 } 00:20:29.452 ]' 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.452 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.019 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:20:30.584 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.584 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:30.584 21:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.584 21:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.584 21:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.584 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.584 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:30.584 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 
-- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.843 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.778 00:20:31.778 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.778 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.778 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.778 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.778 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.778 21:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.778 21:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.778 21:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.778 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.778 { 00:20:31.778 "cntlid": 91, 00:20:31.778 "qid": 0, 00:20:31.778 "state": "enabled", 00:20:31.778 "listen_address": { 00:20:31.778 "trtype": "TCP", 00:20:31.778 "adrfam": "IPv4", 00:20:31.778 "traddr": "10.0.0.2", 00:20:31.778 "trsvcid": "4420" 00:20:31.778 }, 00:20:31.778 "peer_address": { 00:20:31.778 "trtype": "TCP", 00:20:31.778 "adrfam": "IPv4", 00:20:31.778 "traddr": "10.0.0.1", 00:20:31.778 "trsvcid": "49318" 00:20:31.778 }, 00:20:31.778 "auth": { 00:20:31.778 "state": "completed", 00:20:31.778 "digest": "sha384", 00:20:31.778 "dhgroup": "ffdhe8192" 00:20:31.778 } 00:20:31.778 } 00:20:31.778 ]' 00:20:31.778 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.037 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.037 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.037 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.037 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.037 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.037 21:37:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.037 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.295 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.231 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.163 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.164 { 00:20:34.164 "cntlid": 93, 00:20:34.164 "qid": 0, 00:20:34.164 "state": "enabled", 00:20:34.164 "listen_address": { 00:20:34.164 "trtype": "TCP", 00:20:34.164 "adrfam": "IPv4", 00:20:34.164 "traddr": "10.0.0.2", 00:20:34.164 "trsvcid": "4420" 00:20:34.164 }, 00:20:34.164 "peer_address": { 00:20:34.164 "trtype": "TCP", 00:20:34.164 "adrfam": "IPv4", 00:20:34.164 "traddr": "10.0.0.1", 00:20:34.164 "trsvcid": "49342" 00:20:34.164 }, 00:20:34.164 "auth": { 00:20:34.164 "state": "completed", 00:20:34.164 "digest": "sha384", 00:20:34.164 "dhgroup": "ffdhe8192" 00:20:34.164 } 00:20:34.164 } 00:20:34.164 ]' 00:20:34.164 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.422 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.422 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.422 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.422 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.422 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.422 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.422 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.680 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.616 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.183 00:20:36.183 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.183 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.183 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.441 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.441 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.441 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.441 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.441 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.441 21:37:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.441 { 00:20:36.441 "cntlid": 95, 00:20:36.441 "qid": 0, 00:20:36.441 "state": "enabled", 00:20:36.441 "listen_address": { 00:20:36.441 "trtype": "TCP", 00:20:36.441 "adrfam": "IPv4", 00:20:36.441 "traddr": "10.0.0.2", 00:20:36.441 "trsvcid": "4420" 00:20:36.441 }, 00:20:36.441 "peer_address": { 00:20:36.441 "trtype": "TCP", 00:20:36.441 "adrfam": "IPv4", 00:20:36.441 "traddr": "10.0.0.1", 00:20:36.441 "trsvcid": "49368" 00:20:36.441 }, 00:20:36.441 "auth": { 00:20:36.441 "state": "completed", 00:20:36.441 "digest": "sha384", 00:20:36.441 "dhgroup": "ffdhe8192" 00:20:36.441 } 00:20:36.441 } 00:20:36.441 ]' 00:20:36.441 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.699 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.699 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.699 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.699 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.699 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.699 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.699 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.957 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:20:37.524 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.524 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:37.524 21:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.524 21:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.856 21:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.856 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:37.856 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.856 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.856 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.856 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.856 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:37.856 21:37:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.856 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.856 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:37.856 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:37.856 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.856 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.856 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.857 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.857 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.857 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.857 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.115 00:20:38.115 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.115 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.115 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.373 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.373 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.373 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.373 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.373 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.373 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.373 { 00:20:38.373 "cntlid": 97, 00:20:38.373 "qid": 0, 00:20:38.373 "state": "enabled", 00:20:38.373 "listen_address": { 00:20:38.373 "trtype": "TCP", 00:20:38.373 "adrfam": "IPv4", 00:20:38.373 "traddr": "10.0.0.2", 00:20:38.373 "trsvcid": "4420" 00:20:38.373 }, 00:20:38.373 "peer_address": { 00:20:38.373 "trtype": "TCP", 00:20:38.374 "adrfam": "IPv4", 00:20:38.374 "traddr": "10.0.0.1", 00:20:38.374 "trsvcid": "49394" 00:20:38.374 }, 00:20:38.374 "auth": { 00:20:38.374 "state": "completed", 00:20:38.374 "digest": "sha512", 00:20:38.374 "dhgroup": "null" 00:20:38.374 } 00:20:38.374 } 00:20:38.374 ]' 00:20:38.374 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.633 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.633 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:20:38.633 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:38.633 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.633 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.633 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.633 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.891 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:20:39.827 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.827 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:39.827 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.827 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.827 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.827 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.827 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.827 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.827 21:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.085 21:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.085 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.085 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.344 00:20:40.344 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.344 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.344 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.602 { 00:20:40.602 "cntlid": 99, 00:20:40.602 "qid": 0, 00:20:40.602 "state": "enabled", 00:20:40.602 "listen_address": { 00:20:40.602 "trtype": "TCP", 00:20:40.602 "adrfam": "IPv4", 00:20:40.602 "traddr": "10.0.0.2", 00:20:40.602 "trsvcid": "4420" 00:20:40.602 }, 00:20:40.602 "peer_address": { 00:20:40.602 "trtype": "TCP", 00:20:40.602 "adrfam": "IPv4", 00:20:40.602 "traddr": "10.0.0.1", 00:20:40.602 "trsvcid": "49416" 00:20:40.602 }, 00:20:40.602 "auth": { 00:20:40.602 "state": "completed", 00:20:40.602 "digest": "sha512", 00:20:40.602 "dhgroup": "null" 00:20:40.602 } 00:20:40.602 } 00:20:40.602 ]' 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.602 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.861 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 
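The pass that just finished exercised sha512 with the null DH group for key1: the SPDK host attached nvme0 with --dhchap-key/--dhchap-ctrlr-key, the qpair's auth block was checked, and the same handshake was then repeated from the kernel initiator via the nvme connect above. A minimal sketch of that initiator leg, with placeholder secrets standing in for this run's DHHC-1 key material:

    # Kernel-initiator leg of one DH-HMAC-CHAP iteration (sketch reconstructed
    # from the trace). HOSTID matches the uuid-based host NQN used in this run.
    HOSTID=00abaa28-3537-eb11-906e-0017a4403562
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid $HOSTID \
        --dhchap-secret 'DHHC-1:01:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'  # omit for one-way auth
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0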
00:20:41.796 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.796 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:41.796 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.796 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.796 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.796 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.796 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.796 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.054 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:42.054 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.054 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.054 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:42.054 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:42.054 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.055 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.055 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.055 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.055 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.055 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.055 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.313 00:20:42.313 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.313 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.313 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.572 { 00:20:42.572 "cntlid": 101, 00:20:42.572 "qid": 0, 00:20:42.572 "state": "enabled", 00:20:42.572 "listen_address": { 00:20:42.572 "trtype": "TCP", 00:20:42.572 "adrfam": "IPv4", 00:20:42.572 "traddr": "10.0.0.2", 00:20:42.572 "trsvcid": "4420" 00:20:42.572 }, 00:20:42.572 "peer_address": { 00:20:42.572 "trtype": "TCP", 00:20:42.572 "adrfam": "IPv4", 00:20:42.572 "traddr": "10.0.0.1", 00:20:42.572 "trsvcid": "58774" 00:20:42.572 }, 00:20:42.572 "auth": { 00:20:42.572 "state": "completed", 00:20:42.572 "digest": "sha512", 00:20:42.572 "dhgroup": "null" 00:20:42.572 } 00:20:42.572 } 00:20:42.572 ]' 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.572 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.830 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:20:43.764 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.764 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:43.764 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.764 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.764 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.764 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.764 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:43.764 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.022 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.281 00:20:44.281 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.281 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.281 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.540 { 00:20:44.540 "cntlid": 103, 00:20:44.540 "qid": 0, 00:20:44.540 "state": "enabled", 00:20:44.540 "listen_address": { 00:20:44.540 "trtype": "TCP", 00:20:44.540 "adrfam": "IPv4", 00:20:44.540 "traddr": "10.0.0.2", 00:20:44.540 "trsvcid": "4420" 00:20:44.540 }, 00:20:44.540 "peer_address": { 00:20:44.540 "trtype": "TCP", 00:20:44.540 "adrfam": "IPv4", 00:20:44.540 "traddr": "10.0.0.1", 00:20:44.540 "trsvcid": "58794" 00:20:44.540 }, 00:20:44.540 "auth": { 00:20:44.540 "state": "completed", 00:20:44.540 "digest": "sha512", 00:20:44.540 "dhgroup": "null" 00:20:44.540 } 00:20:44.540 } 00:20:44.540 ]' 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.540 21:37:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:44.540 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.798 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.798 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.798 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.057 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.624 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.883 21:37:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.883 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.142 00:20:46.142 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.142 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.142 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.401 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.401 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.401 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.401 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.401 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.401 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.401 { 00:20:46.401 "cntlid": 105, 00:20:46.401 "qid": 0, 00:20:46.401 "state": "enabled", 00:20:46.401 "listen_address": { 00:20:46.401 "trtype": "TCP", 00:20:46.401 "adrfam": "IPv4", 00:20:46.401 "traddr": "10.0.0.2", 00:20:46.401 "trsvcid": "4420" 00:20:46.401 }, 00:20:46.401 "peer_address": { 00:20:46.401 "trtype": "TCP", 00:20:46.401 "adrfam": "IPv4", 00:20:46.401 "traddr": "10.0.0.1", 00:20:46.401 "trsvcid": "58828" 00:20:46.401 }, 00:20:46.401 "auth": { 00:20:46.401 "state": "completed", 00:20:46.401 "digest": "sha512", 00:20:46.401 "dhgroup": "ffdhe2048" 00:20:46.401 } 00:20:46.401 } 00:20:46.401 ]' 00:20:46.401 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.659 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.659 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.659 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.659 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.659 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.659 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.659 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.918 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 
00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:20:47.485 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.743 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:47.744 21:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.744 21:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.744 21:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.744 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.744 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.744 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.002 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.261 00:20:48.261 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.261 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:48.261 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.520 { 00:20:48.520 "cntlid": 107, 00:20:48.520 "qid": 0, 00:20:48.520 "state": "enabled", 00:20:48.520 "listen_address": { 00:20:48.520 "trtype": "TCP", 00:20:48.520 "adrfam": "IPv4", 00:20:48.520 "traddr": "10.0.0.2", 00:20:48.520 "trsvcid": "4420" 00:20:48.520 }, 00:20:48.520 "peer_address": { 00:20:48.520 "trtype": "TCP", 00:20:48.520 "adrfam": "IPv4", 00:20:48.520 "traddr": "10.0.0.1", 00:20:48.520 "trsvcid": "58860" 00:20:48.520 }, 00:20:48.520 "auth": { 00:20:48.520 "state": "completed", 00:20:48.520 "digest": "sha512", 00:20:48.520 "dhgroup": "ffdhe2048" 00:20:48.520 } 00:20:48.520 } 00:20:48.520 ]' 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.520 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.778 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:20:49.714 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.714 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:49.714 21:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.714 21:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.714 21:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.714 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.714 21:37:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.714 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.973 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.231 00:20:50.231 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.231 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.231 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.489 { 00:20:50.489 "cntlid": 109, 00:20:50.489 "qid": 0, 00:20:50.489 "state": "enabled", 00:20:50.489 "listen_address": { 00:20:50.489 "trtype": "TCP", 00:20:50.489 "adrfam": "IPv4", 00:20:50.489 "traddr": "10.0.0.2", 00:20:50.489 "trsvcid": "4420" 00:20:50.489 }, 00:20:50.489 "peer_address": { 00:20:50.489 "trtype": "TCP", 00:20:50.489 
"adrfam": "IPv4", 00:20:50.489 "traddr": "10.0.0.1", 00:20:50.489 "trsvcid": "58870" 00:20:50.489 }, 00:20:50.489 "auth": { 00:20:50.489 "state": "completed", 00:20:50.489 "digest": "sha512", 00:20:50.489 "dhgroup": "ffdhe2048" 00:20:50.489 } 00:20:50.489 } 00:20:50.489 ]' 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.489 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.490 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.490 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.490 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.490 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.490 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.747 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:20:51.682 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.682 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:51.682 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.682 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.682 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.682 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.682 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:51.682 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.941 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.199 00:20:52.199 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.199 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.199 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.458 { 00:20:52.458 "cntlid": 111, 00:20:52.458 "qid": 0, 00:20:52.458 "state": "enabled", 00:20:52.458 "listen_address": { 00:20:52.458 "trtype": "TCP", 00:20:52.458 "adrfam": "IPv4", 00:20:52.458 "traddr": "10.0.0.2", 00:20:52.458 "trsvcid": "4420" 00:20:52.458 }, 00:20:52.458 "peer_address": { 00:20:52.458 "trtype": "TCP", 00:20:52.458 "adrfam": "IPv4", 00:20:52.458 "traddr": "10.0.0.1", 00:20:52.458 "trsvcid": "52688" 00:20:52.458 }, 00:20:52.458 "auth": { 00:20:52.458 "state": "completed", 00:20:52.458 "digest": "sha512", 00:20:52.458 "dhgroup": "ffdhe2048" 00:20:52.458 } 00:20:52.458 } 00:20:52.458 ]' 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.458 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.716 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.650 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
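Each iteration drives two SPDK applications: rpc_cmd addresses the target over its default RPC socket (nvmf_subsystem_add_host/remove_host, nvmf_subsystem_get_qpairs), while hostrpc pins -s /var/tmp/host.sock to the host-side app (bdev_nvme_set_options and attach/detach). A sketch of that split for the sha512/ffdhe3072/key0 round starting here, reconstructed from the trace; key0 and ckey0 name key objects the test set up earlier in the run:

    # One connect_authenticate iteration, target vs. host RPC (sketch).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
    # Host side: restrict the initiator to the digest/dhgroup under test.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # Target side (default socket): allow the host NQN with its DHCHAP key pair.
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach, which forces the in-band authentication handshake.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0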
00:20:54.217 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.217 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.217 { 00:20:54.217 "cntlid": 113, 00:20:54.217 "qid": 0, 00:20:54.217 "state": "enabled", 00:20:54.217 "listen_address": { 00:20:54.217 "trtype": "TCP", 00:20:54.217 "adrfam": "IPv4", 00:20:54.217 "traddr": "10.0.0.2", 00:20:54.217 "trsvcid": "4420" 00:20:54.217 }, 00:20:54.217 "peer_address": { 00:20:54.217 "trtype": "TCP", 00:20:54.217 "adrfam": "IPv4", 00:20:54.217 "traddr": "10.0.0.1", 00:20:54.217 "trsvcid": "52724" 00:20:54.217 }, 00:20:54.217 "auth": { 00:20:54.218 "state": "completed", 00:20:54.218 "digest": "sha512", 00:20:54.218 "dhgroup": "ffdhe3072" 00:20:54.218 } 00:20:54.218 } 00:20:54.218 ]' 00:20:54.218 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.476 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.476 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.476 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.476 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.476 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.476 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.476 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.734 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
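The qpair check in the iteration above (target/auth.sh@44-48) is what makes this a positive authentication test rather than a bare connect test: it asserts that the target recorded a completed DH-HMAC-CHAP transaction with exactly the digest and DH group configured on the host. A condensed sketch of that verification, using the same $RPC wrapper and socket layout as the sketch above:

    # Host app must report the attached controller by name.
    [[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]
    # Target must report the qpair's auth transaction as completed with the
    # negotiated parameters (sha512/ffdhe3072 in the round above).
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]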
00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.669 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.927 00:20:55.927 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.927 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.927 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.185 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.185 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.185 21:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.185 21:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.185 21:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.185 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.185 { 00:20:56.185 
"cntlid": 115, 00:20:56.185 "qid": 0, 00:20:56.185 "state": "enabled", 00:20:56.185 "listen_address": { 00:20:56.185 "trtype": "TCP", 00:20:56.185 "adrfam": "IPv4", 00:20:56.185 "traddr": "10.0.0.2", 00:20:56.185 "trsvcid": "4420" 00:20:56.185 }, 00:20:56.185 "peer_address": { 00:20:56.185 "trtype": "TCP", 00:20:56.185 "adrfam": "IPv4", 00:20:56.185 "traddr": "10.0.0.1", 00:20:56.185 "trsvcid": "52760" 00:20:56.185 }, 00:20:56.185 "auth": { 00:20:56.185 "state": "completed", 00:20:56.185 "digest": "sha512", 00:20:56.185 "dhgroup": "ffdhe3072" 00:20:56.185 } 00:20:56.185 } 00:20:56.185 ]' 00:20:56.185 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.444 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.444 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.444 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.444 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.444 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.444 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.444 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.702 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.638 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.897 00:20:57.897 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.897 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.897 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.155 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.155 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.155 21:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.155 21:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.155 21:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.155 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.155 { 00:20:58.155 "cntlid": 117, 00:20:58.155 "qid": 0, 00:20:58.155 "state": "enabled", 00:20:58.155 "listen_address": { 00:20:58.155 "trtype": "TCP", 00:20:58.155 "adrfam": "IPv4", 00:20:58.155 "traddr": "10.0.0.2", 00:20:58.155 "trsvcid": "4420" 00:20:58.155 }, 00:20:58.155 "peer_address": { 00:20:58.155 "trtype": "TCP", 00:20:58.155 "adrfam": "IPv4", 00:20:58.155 "traddr": "10.0.0.1", 00:20:58.155 "trsvcid": "52776" 00:20:58.155 }, 00:20:58.155 "auth": { 00:20:58.155 "state": "completed", 00:20:58.155 "digest": "sha512", 00:20:58.155 "dhgroup": "ffdhe3072" 00:20:58.155 } 00:20:58.155 } 00:20:58.155 ]' 00:20:58.155 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.413 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.413 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.413 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.413 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:20:58.413 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.413 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.413 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.671 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.606 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.864 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.864 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.864 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.123 00:21:00.123 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.123 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.123 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.381 { 00:21:00.381 "cntlid": 119, 00:21:00.381 "qid": 0, 00:21:00.381 "state": "enabled", 00:21:00.381 "listen_address": { 00:21:00.381 "trtype": "TCP", 00:21:00.381 "adrfam": "IPv4", 00:21:00.381 "traddr": "10.0.0.2", 00:21:00.381 "trsvcid": "4420" 00:21:00.381 }, 00:21:00.381 "peer_address": { 00:21:00.381 "trtype": "TCP", 00:21:00.381 "adrfam": "IPv4", 00:21:00.381 "traddr": "10.0.0.1", 00:21:00.381 "trsvcid": "52802" 00:21:00.381 }, 00:21:00.381 "auth": { 00:21:00.381 "state": "completed", 00:21:00.381 "digest": "sha512", 00:21:00.381 "dhgroup": "ffdhe3072" 00:21:00.381 } 00:21:00.381 } 00:21:00.381 ]' 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.381 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.640 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.574 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.832 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.091 00:21:02.091 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.091 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.091 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.350 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.350 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.350 21:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.350 21:38:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.350 21:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.350 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.350 { 00:21:02.350 "cntlid": 121, 00:21:02.350 "qid": 0, 00:21:02.350 "state": "enabled", 00:21:02.350 "listen_address": { 00:21:02.350 "trtype": "TCP", 00:21:02.350 "adrfam": "IPv4", 00:21:02.350 "traddr": "10.0.0.2", 00:21:02.350 "trsvcid": "4420" 00:21:02.350 }, 00:21:02.350 "peer_address": { 00:21:02.350 "trtype": "TCP", 00:21:02.350 "adrfam": "IPv4", 00:21:02.350 "traddr": "10.0.0.1", 00:21:02.350 "trsvcid": "40360" 00:21:02.350 }, 00:21:02.350 "auth": { 00:21:02.350 "state": "completed", 00:21:02.350 "digest": "sha512", 00:21:02.350 "dhgroup": "ffdhe4096" 00:21:02.350 } 00:21:02.350 } 00:21:02.350 ]' 00:21:02.350 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.350 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.350 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.608 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.608 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.608 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.608 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.608 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.871 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:21:03.808 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.808 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:03.808 21:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.808 21:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.808 21:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.808 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.808 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.808 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.808 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.375 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.375 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.375 { 00:21:04.375 "cntlid": 123, 00:21:04.375 "qid": 0, 00:21:04.375 "state": "enabled", 00:21:04.375 "listen_address": { 00:21:04.375 "trtype": "TCP", 00:21:04.375 "adrfam": "IPv4", 00:21:04.375 "traddr": "10.0.0.2", 00:21:04.375 "trsvcid": "4420" 00:21:04.375 }, 00:21:04.375 "peer_address": { 00:21:04.375 "trtype": "TCP", 00:21:04.375 "adrfam": "IPv4", 00:21:04.375 "traddr": "10.0.0.1", 00:21:04.375 "trsvcid": "40378" 00:21:04.375 }, 00:21:04.375 "auth": { 00:21:04.375 "state": "completed", 00:21:04.375 "digest": "sha512", 00:21:04.375 "dhgroup": "ffdhe4096" 00:21:04.375 } 00:21:04.375 } 00:21:04.375 ]' 00:21:04.634 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.634 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.634 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.634 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.634 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.634 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.634 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.634 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.894 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:21:05.894 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.894 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:05.894 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.894 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.894 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.894 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.894 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.894 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.894 
21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.894 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.462 00:21:06.462 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.462 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.462 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.721 { 00:21:06.721 "cntlid": 125, 00:21:06.721 "qid": 0, 00:21:06.721 "state": "enabled", 00:21:06.721 "listen_address": { 00:21:06.721 "trtype": "TCP", 00:21:06.721 "adrfam": "IPv4", 00:21:06.721 "traddr": "10.0.0.2", 00:21:06.721 "trsvcid": "4420" 00:21:06.721 }, 00:21:06.721 "peer_address": { 00:21:06.721 "trtype": "TCP", 00:21:06.721 "adrfam": "IPv4", 00:21:06.721 "traddr": "10.0.0.1", 00:21:06.721 "trsvcid": "40416" 00:21:06.721 }, 00:21:06.721 "auth": { 00:21:06.721 "state": "completed", 00:21:06.721 "digest": "sha512", 00:21:06.721 "dhgroup": "ffdhe4096" 00:21:06.721 } 00:21:06.721 } 00:21:06.721 ]' 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.721 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.979 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:21:07.914 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.914 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:07.914 21:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.915 21:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.915 21:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.915 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.915 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.915 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.915 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.482 00:21:08.482 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.482 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.482 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.741 { 00:21:08.741 "cntlid": 127, 00:21:08.741 "qid": 0, 00:21:08.741 "state": "enabled", 00:21:08.741 "listen_address": { 00:21:08.741 "trtype": "TCP", 00:21:08.741 "adrfam": "IPv4", 00:21:08.741 "traddr": "10.0.0.2", 00:21:08.741 "trsvcid": "4420" 00:21:08.741 }, 00:21:08.741 "peer_address": { 00:21:08.741 "trtype": "TCP", 00:21:08.741 "adrfam": "IPv4", 00:21:08.741 "traddr": "10.0.0.1", 00:21:08.741 "trsvcid": "40444" 00:21:08.741 }, 00:21:08.741 "auth": { 00:21:08.741 "state": "completed", 00:21:08.741 "digest": "sha512", 00:21:08.741 "dhgroup": "ffdhe4096" 00:21:08.741 } 00:21:08.741 } 00:21:08.741 ]' 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.741 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.001 21:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
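The target/auth.sh@92-94 markers just above (and the connect_authenticate call at @96 that follows) are the xtrace of the test's driver loop, which pins the host to a single digest/dhgroup pair and then runs one authentication pass per key. A minimal reconstruction of that loop, assembled only from those markers — the digests/dhgroups/keys array names and the outer digest loop are assumptions the log itself does not show, and this run only ever prints sha512:

    # sketch of the loop traced at target/auth.sh@92-96 (reconstruction, not the verbatim script)
    for digest in "${digests[@]}"; do            # only sha512 appears in this log
        for dhgroup in "${dhgroups[@]}"; do      # here: ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192
            for keyid in "${!keys[@]}"; do       # key0 .. key3
                # restrict the host to exactly one digest/dhgroup pair per pass (auth.sh@94)
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@96
            done
        done
    done

Each block of records between two bdev_nvme_set_options invocations in the log is one iteration of the inner loop.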
00:21:09.938 21:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.938 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.506 00:21:10.506 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.506 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.506 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.765 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.765 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.765 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.765 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.765 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.766 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.766 { 00:21:10.766 "cntlid": 129, 00:21:10.766 "qid": 0, 00:21:10.766 "state": "enabled", 00:21:10.766 "listen_address": { 00:21:10.766 "trtype": "TCP", 00:21:10.766 "adrfam": "IPv4", 00:21:10.766 "traddr": "10.0.0.2", 00:21:10.766 "trsvcid": "4420" 00:21:10.766 }, 00:21:10.766 "peer_address": { 00:21:10.766 "trtype": "TCP", 00:21:10.766 "adrfam": "IPv4", 00:21:10.766 "traddr": "10.0.0.1", 00:21:10.766 "trsvcid": "40482" 00:21:10.766 }, 00:21:10.766 "auth": { 
00:21:10.766 "state": "completed", 00:21:10.766 "digest": "sha512", 00:21:10.766 "dhgroup": "ffdhe6144" 00:21:10.766 } 00:21:10.766 } 00:21:10.766 ]' 00:21:10.766 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.766 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.766 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.766 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.766 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.025 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.025 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.025 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.283 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:21:11.850 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.850 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:11.850 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.850 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.850 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.850 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.850 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.850 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.109 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.676 00:21:12.676 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.676 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.676 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.676 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.934 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.934 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.934 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.934 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.934 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.934 { 00:21:12.934 "cntlid": 131, 00:21:12.934 "qid": 0, 00:21:12.934 "state": "enabled", 00:21:12.934 "listen_address": { 00:21:12.934 "trtype": "TCP", 00:21:12.934 "adrfam": "IPv4", 00:21:12.934 "traddr": "10.0.0.2", 00:21:12.934 "trsvcid": "4420" 00:21:12.934 }, 00:21:12.934 "peer_address": { 00:21:12.934 "trtype": "TCP", 00:21:12.934 "adrfam": "IPv4", 00:21:12.934 "traddr": "10.0.0.1", 00:21:12.934 "trsvcid": "49038" 00:21:12.934 }, 00:21:12.934 "auth": { 00:21:12.934 "state": "completed", 00:21:12.934 "digest": "sha512", 00:21:12.935 "dhgroup": "ffdhe6144" 00:21:12.935 } 00:21:12.935 } 00:21:12.935 ]' 00:21:12.935 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.935 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.935 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.935 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.935 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.935 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.935 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.935 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.193 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:21:13.760 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.760 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:13.760 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.760 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.760 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.760 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.760 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.760 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.019 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:14.586 00:21:14.586 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.586 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.586 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.845 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.845 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.845 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.845 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.845 21:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.845 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.845 { 00:21:14.845 "cntlid": 133, 00:21:14.845 "qid": 0, 00:21:14.845 "state": "enabled", 00:21:14.845 "listen_address": { 00:21:14.845 "trtype": "TCP", 00:21:14.845 "adrfam": "IPv4", 00:21:14.845 "traddr": "10.0.0.2", 00:21:14.845 "trsvcid": "4420" 00:21:14.845 }, 00:21:14.845 "peer_address": { 00:21:14.845 "trtype": "TCP", 00:21:14.845 "adrfam": "IPv4", 00:21:14.845 "traddr": "10.0.0.1", 00:21:14.845 "trsvcid": "49072" 00:21:14.845 }, 00:21:14.845 "auth": { 00:21:14.845 "state": "completed", 00:21:14.845 "digest": "sha512", 00:21:14.845 "dhgroup": "ffdhe6144" 00:21:14.845 } 00:21:14.845 } 00:21:14.845 ]' 00:21:14.845 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.845 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.845 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.845 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.845 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.104 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.104 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.104 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.362 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:21:15.928 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.186 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:16.186 21:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.186 21:38:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.186 21:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.186 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.186 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.186 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.186 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:16.444 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:16.445 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:16.703 00:21:16.703 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.703 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.703 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.981 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.981 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.981 21:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.981 21:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.981 21:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.981 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.981 { 00:21:16.981 "cntlid": 135, 00:21:16.981 "qid": 0, 00:21:16.981 "state": "enabled", 00:21:16.981 "listen_address": { 
00:21:16.981 "trtype": "TCP", 00:21:16.981 "adrfam": "IPv4", 00:21:16.981 "traddr": "10.0.0.2", 00:21:16.981 "trsvcid": "4420" 00:21:16.981 }, 00:21:16.981 "peer_address": { 00:21:16.981 "trtype": "TCP", 00:21:16.981 "adrfam": "IPv4", 00:21:16.981 "traddr": "10.0.0.1", 00:21:16.981 "trsvcid": "49096" 00:21:16.981 }, 00:21:16.981 "auth": { 00:21:16.981 "state": "completed", 00:21:16.981 "digest": "sha512", 00:21:16.982 "dhgroup": "ffdhe6144" 00:21:16.982 } 00:21:16.982 } 00:21:16.982 ]' 00:21:16.982 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.982 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.240 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.240 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.240 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.240 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.240 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.240 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.498 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:21:18.066 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.325 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:18.325 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.325 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.325 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.325 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.325 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.325 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.325 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.583 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.150 00:21:19.150 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.150 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.150 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.409 { 00:21:19.409 "cntlid": 137, 00:21:19.409 "qid": 0, 00:21:19.409 "state": "enabled", 00:21:19.409 "listen_address": { 00:21:19.409 "trtype": "TCP", 00:21:19.409 "adrfam": "IPv4", 00:21:19.409 "traddr": "10.0.0.2", 00:21:19.409 "trsvcid": "4420" 00:21:19.409 }, 00:21:19.409 "peer_address": { 00:21:19.409 "trtype": "TCP", 00:21:19.409 "adrfam": "IPv4", 00:21:19.409 "traddr": "10.0.0.1", 00:21:19.409 "trsvcid": "49116" 00:21:19.409 }, 00:21:19.409 "auth": { 00:21:19.409 "state": "completed", 00:21:19.409 "digest": "sha512", 00:21:19.409 "dhgroup": "ffdhe8192" 00:21:19.409 } 00:21:19.409 } 00:21:19.409 ]' 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.409 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.667 21:38:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.667 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.667 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.925 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:21:20.492 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.492 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:20.492 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.492 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.492 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.492 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.492 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.492 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.750 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.750 21:38:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.316 00:21:21.316 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.316 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.316 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.575 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.575 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.575 21:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.575 21:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.575 21:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.575 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.575 { 00:21:21.575 "cntlid": 139, 00:21:21.575 "qid": 0, 00:21:21.575 "state": "enabled", 00:21:21.575 "listen_address": { 00:21:21.575 "trtype": "TCP", 00:21:21.575 "adrfam": "IPv4", 00:21:21.575 "traddr": "10.0.0.2", 00:21:21.575 "trsvcid": "4420" 00:21:21.575 }, 00:21:21.575 "peer_address": { 00:21:21.575 "trtype": "TCP", 00:21:21.575 "adrfam": "IPv4", 00:21:21.575 "traddr": "10.0.0.1", 00:21:21.575 "trsvcid": "41096" 00:21:21.575 }, 00:21:21.575 "auth": { 00:21:21.575 "state": "completed", 00:21:21.575 "digest": "sha512", 00:21:21.575 "dhgroup": "ffdhe8192" 00:21:21.575 } 00:21:21.575 } 00:21:21.575 ]' 00:21:21.575 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.834 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.834 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.834 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.834 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.834 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.835 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.835 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.094 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTJmYzk5NDg3MDU0MDcxOGEwZTI1OWZlMjc0NWYwYzDjLYEU: --dhchap-ctrl-secret DHHC-1:02:Njc4NWI2OTkyZDVmNzA5MjM1ZTQ5NzhhZGRlMjBlNzZhNzI5OWM1ZTk0NTA3YzllFvD1/A==: 00:21:23.029 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:23.029 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:23.029 21:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.029 21:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.029 21:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.029 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.029 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.029 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.029 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.963 00:21:23.963 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.963 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.963 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.963 { 00:21:23.963 "cntlid": 141, 00:21:23.963 "qid": 0, 00:21:23.963 "state": "enabled", 00:21:23.963 "listen_address": { 00:21:23.963 "trtype": "TCP", 00:21:23.963 "adrfam": "IPv4", 00:21:23.963 "traddr": "10.0.0.2", 00:21:23.963 "trsvcid": "4420" 00:21:23.963 }, 00:21:23.963 "peer_address": { 00:21:23.963 "trtype": "TCP", 00:21:23.963 "adrfam": "IPv4", 00:21:23.963 "traddr": "10.0.0.1", 00:21:23.963 "trsvcid": "41120" 00:21:23.963 }, 00:21:23.963 "auth": { 00:21:23.963 "state": "completed", 00:21:23.963 "digest": "sha512", 00:21:23.963 "dhgroup": "ffdhe8192" 00:21:23.963 } 00:21:23.963 } 00:21:23.963 ]' 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.963 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.222 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTEyMzFlMzI4ZjI1YmFiZWQwMGUyOWNkNGQ1NDExYTdkMWQ2MzRiOTM2Y2Q1NGE1bPMCmQ==: --dhchap-ctrl-secret DHHC-1:01:YTAzNjBhMzVhZDNiODIyMTg4YTJmYzZkODlhYjNkMjHMTAmO: 00:21:25.157 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.157 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:25.157 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.157 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.157 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.157 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.157 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.416 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.981 00:21:25.981 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.981 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.981 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.239 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.239 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.239 21:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.239 21:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.239 21:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.239 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.239 { 00:21:26.239 "cntlid": 143, 00:21:26.239 "qid": 0, 00:21:26.239 "state": "enabled", 00:21:26.239 "listen_address": { 00:21:26.239 "trtype": "TCP", 00:21:26.239 "adrfam": "IPv4", 00:21:26.239 "traddr": "10.0.0.2", 00:21:26.239 "trsvcid": "4420" 00:21:26.239 }, 00:21:26.239 "peer_address": { 00:21:26.239 "trtype": "TCP", 00:21:26.239 "adrfam": "IPv4", 00:21:26.239 "traddr": "10.0.0.1", 00:21:26.239 "trsvcid": "41136" 00:21:26.239 }, 00:21:26.239 "auth": { 00:21:26.239 "state": "completed", 00:21:26.239 "digest": "sha512", 00:21:26.239 "dhgroup": "ffdhe8192" 00:21:26.239 } 00:21:26.239 } 00:21:26.239 ]' 00:21:26.239 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.239 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.239 21:38:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.497 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.497 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.497 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.497 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.498 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.756 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
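Condensed, each connect_authenticate iteration traced above is the following host/target RPC exchange. This is a minimal sketch reassembled from the commands in this log, assuming the target from this job is still listening on 10.0.0.2:4420 and that key0/ckey0 name keys already loaded from the generated /tmp/spdk.key-* files; outside this rig the socket paths, NQNs, and key names would differ.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: restrict negotiation to one digest/dhgroup pair per iteration.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side: allow the host with a bidirectional key pair (the ckey
    # enables controller-to-host authentication as well).
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # DH-HMAC-CHAP runs during the CONNECT issued by attach_controller.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The qpair only reports auth state "completed" if negotiation succeeded.
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The nvme connect/disconnect pairs interleaved in the log exercise the same secrets through the kernel initiator, passing them directly as --dhchap-secret/--dhchap-ctrl-secret instead of by key name.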
00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.692 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.626 00:21:28.626 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.626 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.626 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.626 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.626 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.627 21:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.627 21:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.627 21:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.627 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.627 { 00:21:28.627 "cntlid": 145, 00:21:28.627 "qid": 0, 00:21:28.627 "state": "enabled", 00:21:28.627 "listen_address": { 00:21:28.627 "trtype": "TCP", 00:21:28.627 "adrfam": "IPv4", 00:21:28.627 "traddr": "10.0.0.2", 00:21:28.627 "trsvcid": "4420" 00:21:28.627 }, 00:21:28.627 "peer_address": { 00:21:28.627 "trtype": "TCP", 00:21:28.627 "adrfam": "IPv4", 00:21:28.627 "traddr": "10.0.0.1", 00:21:28.627 "trsvcid": "41178" 00:21:28.627 }, 00:21:28.627 "auth": { 00:21:28.627 "state": "completed", 00:21:28.627 "digest": "sha512", 00:21:28.627 "dhgroup": "ffdhe8192" 00:21:28.627 } 00:21:28.627 } 00:21:28.627 ]' 00:21:28.627 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.627 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.627 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.885 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.885 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.885 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.885 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.885 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.144 
21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjM5ZjIxOTQ0MmYzZTQ3OTJjOWQzYzA5NzY1ZWMxZDQ0NTRiZDUzYWVlZWZjNGQ0bGeOug==: --dhchap-ctrl-secret DHHC-1:03:ZDcwMjA3MTIwOWVkZDgxN2NlZTM0YTY5YTcwN2I3YTAxM2I5NzdkNTc5NzEwYmZhM2JlOTVhZDUyMzY4ODBhM45e68M=: 00:21:30.079 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:30.080 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:30.648 request: 00:21:30.648 { 00:21:30.648 "name": "nvme0", 00:21:30.648 "trtype": "tcp", 00:21:30.648 "traddr": 
"10.0.0.2", 00:21:30.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:30.648 "adrfam": "ipv4", 00:21:30.648 "trsvcid": "4420", 00:21:30.648 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:30.648 "dhchap_key": "key2", 00:21:30.648 "method": "bdev_nvme_attach_controller", 00:21:30.648 "req_id": 1 00:21:30.648 } 00:21:30.648 Got JSON-RPC error response 00:21:30.648 response: 00:21:30.648 { 00:21:30.648 "code": -5, 00:21:30.648 "message": "Input/output error" 00:21:30.648 } 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:30.648 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.214 request: 00:21:31.214 { 00:21:31.214 "name": "nvme0", 00:21:31.214 "trtype": "tcp", 00:21:31.214 "traddr": "10.0.0.2", 00:21:31.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:31.214 "adrfam": "ipv4", 00:21:31.214 "trsvcid": "4420", 00:21:31.214 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.214 "dhchap_key": "key1", 00:21:31.214 "dhchap_ctrlr_key": "ckey2", 00:21:31.214 "method": "bdev_nvme_attach_controller", 00:21:31.214 "req_id": 1 00:21:31.214 } 00:21:31.214 Got JSON-RPC error response 00:21:31.214 response: 00:21:31.214 { 00:21:31.214 "code": -5, 00:21:31.214 "message": "Input/output error" 00:21:31.214 } 00:21:31.214 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:31.214 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:31.214 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:31.214 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:31.214 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:31.214 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.214 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.214 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.215 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.781 request: 00:21:31.781 { 00:21:31.781 "name": "nvme0", 00:21:31.781 "trtype": "tcp", 00:21:31.781 "traddr": "10.0.0.2", 00:21:31.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:31.781 "adrfam": "ipv4", 00:21:31.781 "trsvcid": "4420", 00:21:31.781 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:31.781 "dhchap_key": "key1", 00:21:31.781 "dhchap_ctrlr_key": "ckey1", 00:21:31.781 "method": "bdev_nvme_attach_controller", 00:21:31.781 "req_id": 1 00:21:31.781 } 00:21:31.781 Got JSON-RPC error response 00:21:31.781 response: 00:21:31.781 { 00:21:31.781 "code": -5, 00:21:31.781 "message": "Input/output error" 00:21:31.781 } 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1446885 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1446885 ']' 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1446885 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:31.781 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1446885 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1446885' 00:21:32.041 killing process with pid 1446885 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1446885 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1446885 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:32.041 21:38:32 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1476033 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1476033 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1476033 ']' 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:32.041 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1476033 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1476033 ']' 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
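The nvmfappstart traced above re-launches the target with authentication-level tracing (-L nvmf_auth) before testing continues. Stripped of the harness wrappers, the spawn reduces to roughly the sketch below; the polling loop is a hypothetical stand-in for the harness's waitforlisten helper, and framework_start_init is a bare-bones substitute for the configuration the script later pushes through rpc_cmd.

    # Start a fresh target inside the test netns, held at --wait-for-rpc,
    # with nvmf_auth debug tracing enabled, as in the command line above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers RPCs.
    until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 1; done

    # Release the --wait-for-rpc hold so subsystem configuration can be applied.
    $RPC framework_start_init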
00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:33.011 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.308 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:33.308 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:33.308 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:33.308 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.308 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.589 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.156 00:21:34.156 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.156 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.156 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.415 { 00:21:34.415 
"cntlid": 1, 00:21:34.415 "qid": 0, 00:21:34.415 "state": "enabled", 00:21:34.415 "listen_address": { 00:21:34.415 "trtype": "TCP", 00:21:34.415 "adrfam": "IPv4", 00:21:34.415 "traddr": "10.0.0.2", 00:21:34.415 "trsvcid": "4420" 00:21:34.415 }, 00:21:34.415 "peer_address": { 00:21:34.415 "trtype": "TCP", 00:21:34.415 "adrfam": "IPv4", 00:21:34.415 "traddr": "10.0.0.1", 00:21:34.415 "trsvcid": "35686" 00:21:34.415 }, 00:21:34.415 "auth": { 00:21:34.415 "state": "completed", 00:21:34.415 "digest": "sha512", 00:21:34.415 "dhgroup": "ffdhe8192" 00:21:34.415 } 00:21:34.415 } 00:21:34.415 ]' 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.415 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.673 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.673 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.673 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.931 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzgwMmM4YzEwYmFiYzhjZjc2MWJhZjliMDRiMmNmODJkZTg0NDNmMzMzOWNmMTIwMmFiYjQ3MzczZDBiZTdmYwZFBis=: 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:35.498 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.756 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.015 request: 00:21:36.015 { 00:21:36.015 "name": "nvme0", 00:21:36.015 "trtype": "tcp", 00:21:36.015 "traddr": "10.0.0.2", 00:21:36.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:36.015 "adrfam": "ipv4", 00:21:36.015 "trsvcid": "4420", 00:21:36.015 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.015 "dhchap_key": "key3", 00:21:36.015 "method": "bdev_nvme_attach_controller", 00:21:36.015 "req_id": 1 00:21:36.015 } 00:21:36.015 Got JSON-RPC error response 00:21:36.015 response: 00:21:36.015 { 00:21:36.015 "code": -5, 00:21:36.015 "message": "Input/output error" 00:21:36.015 } 00:21:36.015 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:36.015 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:36.015 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:36.015 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:36.015 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:36.015 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:36.015 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:36.015 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.274 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.533 request: 00:21:36.533 { 00:21:36.533 "name": "nvme0", 00:21:36.533 "trtype": "tcp", 00:21:36.533 "traddr": "10.0.0.2", 00:21:36.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:36.533 "adrfam": "ipv4", 00:21:36.533 "trsvcid": "4420", 00:21:36.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.533 "dhchap_key": "key3", 00:21:36.534 "method": "bdev_nvme_attach_controller", 00:21:36.534 "req_id": 1 00:21:36.534 } 00:21:36.534 Got JSON-RPC error response 00:21:36.534 response: 00:21:36.534 { 00:21:36.534 "code": -5, 00:21:36.534 "message": "Input/output error" 00:21:36.534 } 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.534 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:36.792 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:36.793 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:37.051 request: 00:21:37.051 { 00:21:37.051 "name": "nvme0", 00:21:37.051 "trtype": "tcp", 00:21:37.051 "traddr": "10.0.0.2", 00:21:37.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:21:37.051 "adrfam": "ipv4", 00:21:37.051 "trsvcid": "4420", 00:21:37.051 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:37.051 "dhchap_key": "key0", 00:21:37.051 "dhchap_ctrlr_key": "key1", 00:21:37.051 "method": "bdev_nvme_attach_controller", 00:21:37.051 "req_id": 1 00:21:37.051 } 00:21:37.051 Got JSON-RPC error response 00:21:37.051 response: 00:21:37.051 { 00:21:37.051 "code": -5, 00:21:37.051 "message": "Input/output error" 00:21:37.051 } 00:21:37.051 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:37.051 21:38:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:37.051 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:37.051 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:37.051 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:37.051 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:37.617 00:21:37.617 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:37.617 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:37.617 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.617 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.617 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.617 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.876 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:37.876 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:37.876 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1446968 00:21:37.876 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1446968 ']' 00:21:37.876 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1446968 00:21:37.876 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:37.876 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:37.876 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1446968 00:21:38.135 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:38.135 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:38.135 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1446968' 00:21:38.135 killing process with pid 1446968 00:21:38.135 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1446968 00:21:38.135 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1446968 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
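Note: the negative-path attaches above fail with JSON-RPC -5 (Input/output error) because the DH-HMAC-CHAP key pairing no longer matches what the subsystem allows, while the final attach with key0 alone succeeds. Reduced to a standalone sketch (socket path, address and NQNs assume the values used in this run), the attach/verify/detach cycle traced above is:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    # attach using only the host key; a mismatched --dhchap-ctrlr-key fails auth with -5 as above
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
    # verify the controller registered, then detach it
    [ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" = nvme0 ]
    $rpc bdev_nvme_detach_controller nvme0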
00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:38.395 rmmod nvme_tcp 00:21:38.395 rmmod nvme_fabrics 00:21:38.395 rmmod nvme_keyring 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1476033 ']' 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1476033 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1476033 ']' 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1476033 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1476033 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1476033' 00:21:38.395 killing process with pid 1476033 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1476033 00:21:38.395 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1476033 00:21:38.654 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.654 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:38.654 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:38.654 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.654 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.654 21:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.654 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.654 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.189 21:38:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:41.189 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.4Yi /tmp/spdk.key-sha256.lQV /tmp/spdk.key-sha384.tul /tmp/spdk.key-sha512.DgG /tmp/spdk.key-sha512.f5V /tmp/spdk.key-sha384.5rv /tmp/spdk.key-sha256.IUJ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:41.189 00:21:41.189 real 2m51.506s 00:21:41.189 user 6m38.986s 00:21:41.189 sys 0m23.722s 00:21:41.189 21:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:41.189 21:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.189 ************************************ 00:21:41.189 END TEST 
nvmf_auth_target 00:21:41.189 ************************************ 00:21:41.189 21:38:40 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:41.189 21:38:40 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:41.189 21:38:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:21:41.189 21:38:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:41.189 21:38:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.189 ************************************ 00:21:41.189 START TEST nvmf_bdevio_no_huge 00:21:41.189 ************************************ 00:21:41.189 21:38:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:41.189 * Looking for test storage... 00:21:41.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
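Note: the START TEST / END TEST banners and the per-test real/user/sys summary come from the run_test wrapper invoked above. A simplified sketch of its behavior (the real helper in autotest_common.sh also checks the argument count, as the "'[' 4 -le 1 ']'" trace shows, and toggles xtrace):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                                   # emits the real/user/sys lines seen per test
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }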
00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.189 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:41.190 21:38:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.746 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.746 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:47.747 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:47.747 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.747 21:38:47 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:47.747 Found net devices under 0000:af:00.0: cvl_0_0 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:47.747 Found net devices under 0000:af:00.1: cvl_0_1 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.747 
21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:21:47.747 00:21:47.747 --- 10.0.0.2 ping statistics --- 00:21:47.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.747 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:21:47.747 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:21:47.747 00:21:47.747 --- 10.0.0.1 ping statistics --- 00:21:47.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.748 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1481286 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1481286 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 1481286 ']' 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:47.748 21:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.748 [2024-06-07 21:38:47.702303] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:21:47.748 [2024-06-07 21:38:47.702362] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:47.748 [2024-06-07 21:38:47.803802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.748 [2024-06-07 21:38:47.921578] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.748 [2024-06-07 21:38:47.921616] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.748 [2024-06-07 21:38:47.921626] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.748 [2024-06-07 21:38:47.921635] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.748 [2024-06-07 21:38:47.921642] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.748 [2024-06-07 21:38:47.921756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:47.748 [2024-06-07 21:38:47.921788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:47.748 [2024-06-07 21:38:47.921902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.748 [2024-06-07 21:38:47.921902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.679 [2024-06-07 21:38:48.694201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.679 Malloc0 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.679 [2024-06-07 21:38:48.740417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:48.679 { 00:21:48.679 "params": { 00:21:48.679 "name": "Nvme$subsystem", 00:21:48.679 "trtype": "$TEST_TRANSPORT", 00:21:48.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.679 "adrfam": "ipv4", 00:21:48.679 "trsvcid": "$NVMF_PORT", 00:21:48.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.679 "hdgst": ${hdgst:-false}, 00:21:48.679 "ddgst": ${ddgst:-false} 00:21:48.679 }, 00:21:48.679 "method": "bdev_nvme_attach_controller" 00:21:48.679 } 00:21:48.679 EOF 00:21:48.679 )") 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
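Note: gen_nvmf_target_json, whose heredoc template is traced above, emits one bdev_nvme_attach_controller stanza per requested subsystem, joins them with IFS=',', and validates the result with jq before bdevio reads it over a process-substitution fd (the /dev/fd/62 in the invocation). A condensed single-subsystem equivalent, using the values from this run:

    gen_json() {
        jq -n '{
          params: {
            name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2", adrfam: "ipv4",
            trsvcid: "4420", subnqn: "nqn.2016-06.io.spdk:cnode1",
            hostnqn: "nqn.2016-06.io.spdk:host1", hdgst: false, ddgst: false
          },
          method: "bdev_nvme_attach_controller"
        }'
    }
    # feed the config to bdevio over a process substitution, as in the traced invocation
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_json) --no-huge -s 1024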
00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:48.679 21:38:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:48.679 "params": { 00:21:48.679 "name": "Nvme1", 00:21:48.679 "trtype": "tcp", 00:21:48.679 "traddr": "10.0.0.2", 00:21:48.679 "adrfam": "ipv4", 00:21:48.679 "trsvcid": "4420", 00:21:48.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.679 "hdgst": false, 00:21:48.679 "ddgst": false 00:21:48.679 }, 00:21:48.679 "method": "bdev_nvme_attach_controller" 00:21:48.679 }' 00:21:48.679 [2024-06-07 21:38:48.792058] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:21:48.679 [2024-06-07 21:38:48.792121] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1481350 ] 00:21:48.679 [2024-06-07 21:38:48.885919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:48.937 [2024-06-07 21:38:49.004688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.937 [2024-06-07 21:38:49.004789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.937 [2024-06-07 21:38:49.004790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.937 I/O targets: 00:21:48.937 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:48.937 00:21:48.937 00:21:48.937 CUnit - A unit testing framework for C - Version 2.1-3 00:21:48.937 http://cunit.sourceforge.net/ 00:21:48.937 00:21:48.937 00:21:48.937 Suite: bdevio tests on: Nvme1n1 00:21:49.194 Test: blockdev write read block ...passed 00:21:49.194 Test: blockdev write zeroes read block ...passed 00:21:49.194 Test: blockdev write zeroes read no split ...passed 00:21:49.194 Test: blockdev write zeroes read split ...passed 00:21:49.194 Test: blockdev write zeroes read split partial ...passed 00:21:49.194 Test: blockdev reset ...[2024-06-07 21:38:49.388645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.194 [2024-06-07 21:38:49.388717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aac400 (9): Bad file descriptor 00:21:49.451 [2024-06-07 21:38:49.498275] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
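Note: in the reset test above, the "Failed to flush tqpair ... (9): Bad file descriptor" line is the old TCP qpair being torn down while the controller disconnects; the "Resetting controller successful." notice that follows shows the reconnect completed. On a host-side app that exposes the bdev over an RPC socket, the same reset could be driven by hand (hypothetical here, since bdevio triggers it internally; $rpc as in the earlier sketch):

    # hypothetical manual trigger; bdevio performs the equivalent in-process
    $rpc bdev_nvme_reset_controller nvme0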
00:21:49.451 passed 00:21:49.451 Test: blockdev write read 8 blocks ...passed 00:21:49.451 Test: blockdev write read size > 128k ...passed 00:21:49.451 Test: blockdev write read invalid size ...passed 00:21:49.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:49.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:49.451 Test: blockdev write read max offset ...passed 00:21:49.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:49.451 Test: blockdev writev readv 8 blocks ...passed 00:21:49.451 Test: blockdev writev readv 30 x 1block ...passed 00:21:49.709 Test: blockdev writev readv block ...passed 00:21:49.709 Test: blockdev writev readv size > 128k ...passed 00:21:49.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:49.709 Test: blockdev comparev and writev ...[2024-06-07 21:38:49.754468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.709 [2024-06-07 21:38:49.754496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.754508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.709 [2024-06-07 21:38:49.754515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.754881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.709 [2024-06-07 21:38:49.754891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.754901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.709 [2024-06-07 21:38:49.754907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.755294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.709 [2024-06-07 21:38:49.755305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.755315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.709 [2024-06-07 21:38:49.755322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.755683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.709 [2024-06-07 21:38:49.755693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.755707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:49.709 [2024-06-07 21:38:49.755715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:49.709 passed 00:21:49.709 Test: blockdev nvme passthru rw ...passed 00:21:49.709 Test: blockdev nvme passthru vendor specific ...[2024-06-07 21:38:49.837559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.709 [2024-06-07 21:38:49.837574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.837759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.709 [2024-06-07 21:38:49.837768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.837954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.709 [2024-06-07 21:38:49.837963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:49.709 [2024-06-07 21:38:49.838150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:49.709 [2024-06-07 21:38:49.838160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:49.709 passed 00:21:49.709 Test: blockdev nvme admin passthru ...passed 00:21:49.709 Test: blockdev copy ...passed 00:21:49.709 00:21:49.709 Run Summary: Type Total Ran Passed Failed Inactive 00:21:49.709 suites 1 1 n/a 0 0 00:21:49.709 tests 23 23 23 0 0 00:21:49.709 asserts 152 152 152 0 n/a 00:21:49.709 00:21:49.709 Elapsed time = 1.445 seconds 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.274 rmmod nvme_tcp 00:21:50.274 rmmod nvme_fabrics 00:21:50.274 rmmod nvme_keyring 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1481286 ']' 00:21:50.274 21:38:50 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1481286 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 1481286 ']' 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 1481286 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1481286 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1481286' 00:21:50.274 killing process with pid 1481286 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 1481286 00:21:50.274 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 1481286 00:21:50.533 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.533 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.533 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.533 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.533 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.533 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.533 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.533 21:38:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.067 21:38:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.067 00:21:53.067 real 0m11.880s 00:21:53.067 user 0m15.475s 00:21:53.067 sys 0m5.994s 00:21:53.067 21:38:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:53.067 21:38:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.067 ************************************ 00:21:53.067 END TEST nvmf_bdevio_no_huge 00:21:53.067 ************************************ 00:21:53.067 21:38:52 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:53.067 21:38:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:53.067 21:38:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:53.067 21:38:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.067 ************************************ 00:21:53.067 START TEST nvmf_tls 00:21:53.067 ************************************ 00:21:53.067 21:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:53.067 * Looking for test storage... 
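Note: the nvmftestfini teardown just traced (here and at the end of nvmf_auth_target earlier) repeats the same pattern after every target test; condensed into a sketch:

    nvmftestfini() {                    # simplified rendering of the traced helper
        sync
        set +e
        for i in {1..20}; do            # nvme-tcp can be briefly busy after I/O stops
            modprobe -v -r nvme-tcp && break   # cascades: nvme_tcp, nvme_fabrics, nvme_keyring
        done
        modprobe -v -r nvme-fabrics
        set -e
        killprocess "$nvmfpid"          # stop the target app (reactor_0 / reactor_3 above)
        # nvmf_tcp_fini / _remove_spdk_ns: drop the cvl_0_0_ns_spdk namespace (details elided)
        ip -4 addr flush cvl_0_1
    }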
00:21:53.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:53.067 21:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.067 21:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:53.067 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.067 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.067 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.067 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.067 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.067 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.067 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.067 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.068 21:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.633 
21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:59.633 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:59.633 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:59.633 Found net devices under 0000:af:00.0: cvl_0_0 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:59.633 Found net devices under 0000:af:00.1: cvl_0_1 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:59.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:21:59.633 00:21:59.633 --- 10.0.0.2 ping statistics --- 00:21:59.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.633 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:21:59.633 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:21:59.633 00:21:59.633 --- 10.0.0.1 ping statistics --- 00:21:59.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.633 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1485656 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1485656 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1485656 ']' 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.634 21:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:59.634 [2024-06-07 21:38:59.518497] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:21:59.634 [2024-06-07 21:38:59.518558] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.634 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.634 [2024-06-07 21:38:59.605118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.634 [2024-06-07 21:38:59.693961] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.634 [2024-06-07 21:38:59.694006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
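Note on ordering: the target above is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which parks the app after the reactors start but before subsystem initialization, so the SSL socket implementation can still be configured over /var/tmp/spdk.sock. A minimal sketch of that sequence, matching the RPCs this trace issues next ($rootdir below is shorthand for the spdk checkout path; it is not a variable the script itself defines):

# start the target in the namespace; --wait-for-rpc pauses before subsystem init
ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -m 0x2 --wait-for-rpc &

# while it waits: make ssl the default socket implementation and pin TLS 1.3
$rootdir/scripts/rpc.py sock_set_default_impl -i ssl
$rootdir/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13

# finish initialization; only after this are transports and subsystems created
$rootdir/scripts/rpc.py framework_start_init

The tls.sh steps that follow do exactly this, and in between they read tls_version and enable_ktls back with sock_impl_get_options to verify each setting round-trips.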
00:21:59.634 [2024-06-07 21:38:59.694016] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.634 [2024-06-07 21:38:59.694030] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.634 [2024-06-07 21:38:59.694039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.634 [2024-06-07 21:38:59.694066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.201 21:39:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:00.201 21:39:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:00.201 21:39:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.201 21:39:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:00.201 21:39:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.201 21:39:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.201 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:00.201 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:00.460 true 00:22:00.460 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.460 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:00.719 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:00.719 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:00.719 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:00.719 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.719 21:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:00.978 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:00.978 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:00.978 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:01.237 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.237 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:01.496 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:01.496 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:01.496 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.496 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:01.755 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:01.755 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:01.755 21:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:02.014 21:39:02 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.014 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:02.272 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:02.272 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:02.272 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:02.531 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.531 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.gMciauc7pl 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.7SHRavbuNN 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.gMciauc7pl 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7SHRavbuNN 00:22:02.790 21:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:03.049 21:39:03 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:03.308 21:39:03 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.gMciauc7pl 00:22:03.308 21:39:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gMciauc7pl 00:22:03.308 21:39:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:03.567 [2024-06-07 21:39:03.728190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.567 21:39:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:03.825 21:39:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:04.083 [2024-06-07 21:39:04.209475] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.083 [2024-06-07 21:39:04.209686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.083 21:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:04.342 malloc0 00:22:04.342 21:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:04.600 21:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gMciauc7pl 00:22:04.859 [2024-06-07 21:39:04.952654] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:04.859 21:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.gMciauc7pl 00:22:04.859 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.837 Initializing NVMe Controllers 00:22:14.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:14.837 Initialization complete. Launching workers. 
00:22:14.837 ======================================================== 00:22:14.837 Latency(us) 00:22:14.837 Device Information : IOPS MiB/s Average min max 00:22:14.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10835.28 42.33 5907.77 1358.38 6754.58 00:22:14.837 ======================================================== 00:22:14.837 Total : 10835.28 42.33 5907.77 1358.38 6754.58 00:22:14.837 00:22:14.837 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gMciauc7pl 00:22:14.837 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:14.837 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:14.837 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:14.837 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gMciauc7pl' 00:22:14.837 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.837 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1489038 00:22:14.837 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:14.838 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:14.838 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1489038 /var/tmp/bdevperf.sock 00:22:14.838 21:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1489038 ']' 00:22:14.838 21:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.838 21:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:14.838 21:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.838 21:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:14.838 21:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.096 [2024-06-07 21:39:15.136631] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:22:15.096 [2024-06-07 21:39:15.136691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489038 ] 00:22:15.096 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.096 [2024-06-07 21:39:15.200288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.096 [2024-06-07 21:39:15.268547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.096 21:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:15.096 21:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:15.096 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gMciauc7pl 00:22:15.354 [2024-06-07 21:39:15.587506] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.354 [2024-06-07 21:39:15.587571] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:15.612 TLSTESTn1 00:22:15.612 21:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:15.612 Running I/O for 10 seconds... 00:22:25.653 00:22:25.653 Latency(us) 00:22:25.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.653 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:25.653 Verification LBA range: start 0x0 length 0x2000 00:22:25.653 TLSTESTn1 : 10.03 3439.39 13.44 0.00 0.00 37146.68 4527.94 46709.29 00:22:25.653 =================================================================================================================== 00:22:25.653 Total : 3439.39 13.44 0.00 0.00 37146.68 4527.94 46709.29 00:22:25.653 0 00:22:25.653 21:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.653 21:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1489038 00:22:25.653 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1489038 ']' 00:22:25.653 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1489038 00:22:25.653 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:25.653 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:25.653 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1489038 00:22:25.913 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:25.913 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:25.913 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1489038' 00:22:25.913 killing process with pid 1489038 00:22:25.913 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1489038 00:22:25.913 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.913 00:22:25.913 Latency(us) 00:22:25.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:22:25.913 =================================================================================================================== 00:22:25.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.913 [2024-06-07 21:39:25.929095] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:25.913 21:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1489038 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7SHRavbuNN 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7SHRavbuNN 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7SHRavbuNN 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7SHRavbuNN' 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1490880 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1490880 /var/tmp/bdevperf.sock 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1490880 ']' 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:25.913 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 [2024-06-07 21:39:26.158801] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:22:25.913 [2024-06-07 21:39:26.158865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490880 ] 00:22:26.172 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.172 [2024-06-07 21:39:26.221668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.172 [2024-06-07 21:39:26.288499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.172 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:26.172 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:26.172 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7SHRavbuNN 00:22:26.431 [2024-06-07 21:39:26.611338] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.431 [2024-06-07 21:39:26.611409] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:26.431 [2024-06-07 21:39:26.620826] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:26.431 [2024-06-07 21:39:26.621553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19daaa0 (107): Transport endpoint is not connected 00:22:26.431 [2024-06-07 21:39:26.622547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19daaa0 (9): Bad file descriptor 00:22:26.431 [2024-06-07 21:39:26.623548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.431 [2024-06-07 21:39:26.623557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:26.431 [2024-06-07 21:39:26.623565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
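The failure above is the wrong-key case: host1 is registered on the target with the first key (/tmp/tmp.gMciauc7pl), but bdevperf presented the second (/tmp/tmp.7SHRavbuNN), so the TLS handshake collapses and the initiator only sees errno 107. The NVMeTLSkey-1:01:...: strings inside those files were built by the format_interchange_psk/format_key helper traced earlier (nvmf/common.sh@702-705). A minimal sketch of that helper, assuming the interchange layout is base64(key bytes + CRC32 of the key, little-endian) with the digest id printed as two hex digits — the key strings in this log are consistent with that layout:

format_key() {
    # <prefix>:<digest id as 2 hex digits>:base64(key || CRC32(key), little-endian):
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity tail
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:  (the key_path key above)

The CRC tail also makes a key checkable offline: strip the prefix and digest fields, base64-decode the rest, and compare CRC32 of the leading bytes against the last four. The JSON-RPC request and Input/output error response that follow are the client-side view of this failed attach.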
00:22:26.431 request: 00:22:26.431 { 00:22:26.431 "name": "TLSTEST", 00:22:26.431 "trtype": "tcp", 00:22:26.431 "traddr": "10.0.0.2", 00:22:26.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.431 "adrfam": "ipv4", 00:22:26.431 "trsvcid": "4420", 00:22:26.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.431 "psk": "/tmp/tmp.7SHRavbuNN", 00:22:26.431 "method": "bdev_nvme_attach_controller", 00:22:26.431 "req_id": 1 00:22:26.431 } 00:22:26.431 Got JSON-RPC error response 00:22:26.431 response: 00:22:26.431 { 00:22:26.431 "code": -5, 00:22:26.431 "message": "Input/output error" 00:22:26.431 } 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1490880 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1490880 ']' 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1490880 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1490880 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1490880' 00:22:26.431 killing process with pid 1490880 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1490880 00:22:26.431 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.431 00:22:26.431 Latency(us) 00:22:26.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.431 =================================================================================================================== 00:22:26.431 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.431 [2024-06-07 21:39:26.694436] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:26.431 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1490880 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gMciauc7pl 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gMciauc7pl 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gMciauc7pl 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gMciauc7pl' 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1491143 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1491143 /var/tmp/bdevperf.sock 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1491143 ']' 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:26.691 21:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.691 [2024-06-07 21:39:26.916636] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:22:26.691 [2024-06-07 21:39:26.916701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491143 ] 00:22:26.691 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.950 [2024-06-07 21:39:26.979776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.950 [2024-06-07 21:39:27.046863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.950 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:26.950 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:26.950 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.gMciauc7pl 00:22:27.209 [2024-06-07 21:39:27.361872] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.209 [2024-06-07 21:39:27.361939] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.209 [2024-06-07 21:39:27.372631] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:27.209 [2024-06-07 21:39:27.372660] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:27.209 [2024-06-07 21:39:27.372688] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.209 [2024-06-07 21:39:27.373133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884aa0 (107): Transport endpoint is not connected 00:22:27.209 [2024-06-07 21:39:27.374125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884aa0 (9): Bad file descriptor 00:22:27.209 [2024-06-07 21:39:27.375126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.209 [2024-06-07 21:39:27.375136] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.209 [2024-06-07 21:39:27.375143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
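Here the key is right but the host NQN is wrong. The two ERROR lines above expose the lookup key: the NVMe/TCP TLS PSK identity is the string 'NVMe0R01 <hostnqn> <subnqn>', and the target resolves it against whatever was registered via nvmf_subsystem_add_host --psk. Since host2 was never added to cnode1, posix_sock_psk_find_session_server_cb has nothing to return and the connection dies during the handshake. A sketch of the identity the initiator presented in this run:

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
#    i.e. exactly the identity in the "Could not find PSK" errors above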
00:22:27.209 request: 00:22:27.209 { 00:22:27.209 "name": "TLSTEST", 00:22:27.209 "trtype": "tcp", 00:22:27.209 "traddr": "10.0.0.2", 00:22:27.209 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.209 "adrfam": "ipv4", 00:22:27.209 "trsvcid": "4420", 00:22:27.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.209 "psk": "/tmp/tmp.gMciauc7pl", 00:22:27.209 "method": "bdev_nvme_attach_controller", 00:22:27.209 "req_id": 1 00:22:27.209 } 00:22:27.210 Got JSON-RPC error response 00:22:27.210 response: 00:22:27.210 { 00:22:27.210 "code": -5, 00:22:27.210 "message": "Input/output error" 00:22:27.210 } 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1491143 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1491143 ']' 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1491143 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1491143 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1491143' 00:22:27.210 killing process with pid 1491143 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1491143 00:22:27.210 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.210 00:22:27.210 Latency(us) 00:22:27.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.210 =================================================================================================================== 00:22:27.210 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.210 [2024-06-07 21:39:27.444526] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:27.210 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1491143 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gMciauc7pl 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gMciauc7pl 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gMciauc7pl 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gMciauc7pl' 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1491163 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1491163 /var/tmp/bdevperf.sock 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1491163 ']' 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:27.469 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.469 [2024-06-07 21:39:27.663040] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:22:27.469 [2024-06-07 21:39:27.663103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491163 ] 00:22:27.469 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.469 [2024-06-07 21:39:27.727192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.728 [2024-06-07 21:39:27.800359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.728 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:27.728 21:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:27.728 21:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gMciauc7pl 00:22:27.987 [2024-06-07 21:39:28.115403] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.987 [2024-06-07 21:39:28.115471] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.987 [2024-06-07 21:39:28.124607] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:27.987 [2024-06-07 21:39:28.124634] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:27.987 [2024-06-07 21:39:28.124664] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.987 [2024-06-07 21:39:28.125568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fdaa0 (107): Transport endpoint is not connected 00:22:27.987 [2024-06-07 21:39:28.126562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fdaa0 (9): Bad file descriptor 00:22:27.988 [2024-06-07 21:39:28.127564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:27.988 [2024-06-07 21:39:28.127573] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.988 [2024-06-07 21:39:28.127581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
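Mirror image of the previous case: host1 holds a valid key, but nqn.2016-06.io.spdk:cnode2 does not exist on this target, so the identity 'NVMe0R01 ...host1 ...cnode2' again matches no registered PSK. For reference, making this attach succeed would take the same RPCs the trace used to stand up cnode1 — a sketch only, with $rootdir standing for the spdk checkout and the serial number invented for illustration:

$rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
$rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
$rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 malloc0 -n 1
$rootdir/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gMciauc7pl

The test of course leaves cnode2 unregistered on purpose; the point is that PSK resolution is per (hostnqn, subnqn) pair, not per target.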
00:22:27.988 request: 00:22:27.988 { 00:22:27.988 "name": "TLSTEST", 00:22:27.988 "trtype": "tcp", 00:22:27.988 "traddr": "10.0.0.2", 00:22:27.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.988 "adrfam": "ipv4", 00:22:27.988 "trsvcid": "4420", 00:22:27.988 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.988 "psk": "/tmp/tmp.gMciauc7pl", 00:22:27.988 "method": "bdev_nvme_attach_controller", 00:22:27.988 "req_id": 1 00:22:27.988 } 00:22:27.988 Got JSON-RPC error response 00:22:27.988 response: 00:22:27.988 { 00:22:27.988 "code": -5, 00:22:27.988 "message": "Input/output error" 00:22:27.988 } 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1491163 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1491163 ']' 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1491163 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1491163 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1491163' 00:22:27.988 killing process with pid 1491163 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1491163 00:22:27.988 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.988 00:22:27.988 Latency(us) 00:22:27.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.988 =================================================================================================================== 00:22:27.988 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.988 [2024-06-07 21:39:28.203464] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:27.988 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1491163 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1491419 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1491419 /var/tmp/bdevperf.sock 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1491419 ']' 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:28.247 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.247 [2024-06-07 21:39:28.423436] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:22:28.247 [2024-06-07 21:39:28.423497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491419 ] 00:22:28.247 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.247 [2024-06-07 21:39:28.486238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.507 [2024-06-07 21:39:28.553509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.507 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:28.507 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:28.507 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:28.767 [2024-06-07 21:39:28.876590] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:28.767 [2024-06-07 21:39:28.878926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc60b0 (9): Bad file descriptor 00:22:28.767 [2024-06-07 21:39:28.879925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.767 [2024-06-07 21:39:28.879937] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:28.767 [2024-06-07 21:39:28.879944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
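Final negative case: no PSK at all. The listener was created with nvmf_subsystem_add_listener ... -k, i.e. TLS-only, so a plain-TCP attach is dropped during connection setup and the initiator again reports errno 107 before the controller can initialize. Side by side, the two attach invocations from this trace ($rootdir abbreviates the long jenkins workspace path):

# succeeds earlier in the run: PSK registered for (host1, cnode1) and presented here
$rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gMciauc7pl

# fails here: same endpoint, no --psk, and the -k listener refuses plaintext
$rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1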
00:22:28.767 request: 00:22:28.767 { 00:22:28.767 "name": "TLSTEST", 00:22:28.767 "trtype": "tcp", 00:22:28.767 "traddr": "10.0.0.2", 00:22:28.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.767 "adrfam": "ipv4", 00:22:28.767 "trsvcid": "4420", 00:22:28.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.767 "method": "bdev_nvme_attach_controller", 00:22:28.767 "req_id": 1 00:22:28.767 } 00:22:28.767 Got JSON-RPC error response 00:22:28.767 response: 00:22:28.767 { 00:22:28.767 "code": -5, 00:22:28.767 "message": "Input/output error" 00:22:28.767 } 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1491419 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1491419 ']' 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1491419 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1491419 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1491419' 00:22:28.767 killing process with pid 1491419 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1491419 00:22:28.767 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.767 00:22:28.767 Latency(us) 00:22:28.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.767 =================================================================================================================== 00:22:28.767 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.767 21:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1491419 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1485656 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1485656 ']' 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1485656 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1485656 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1485656' 00:22:29.026 killing process with pid 1485656 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1485656 
00:22:29.026 [2024-06-07 21:39:29.162363] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:29.026 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1485656 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.wzuEKLfHBe 00:22:29.285 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.wzuEKLfHBe 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1491483 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1491483 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1491483 ']' 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:29.286 21:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.286 [2024-06-07 21:39:29.499111] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
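For reference, the NVMeTLSkey-1 string generated above follows the NVMe/TCP PSK interchange format: a version prefix, a two-digit hash identifier (02 selects SHA-384 here, 01 would be SHA-256), and base64 of the configured key bytes with a CRC-32 appended. A rough equivalent of what format_interchange_psk does (a sketch; the real helper lives in nvmf/common.sh and may differ in detail, e.g. in how the CRC is serialized):

    key=00112233445566778899aabbccddeeff0011223344556677
    digest=2   # 1 = SHA-256, 2 = SHA-384
    # Append a little-endian CRC-32 of the key bytes, base64 the result:
    python3 -c "import base64,zlib;k=b'$key';print('NVMeTLSkey-1:%02d:'%$digest+base64.b64encode(k+zlib.crc32(k).to_bytes(4,'little')).decode()+':')"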
00:22:29.286 [2024-06-07 21:39:29.499172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.286 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.545 [2024-06-07 21:39:29.586145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.545 [2024-06-07 21:39:29.676381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.545 [2024-06-07 21:39:29.676423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.545 [2024-06-07 21:39:29.676433] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.545 [2024-06-07 21:39:29.676442] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.545 [2024-06-07 21:39:29.676450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.545 [2024-06-07 21:39:29.676472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.481 21:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:30.481 21:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:30.481 21:39:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.481 21:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:30.481 21:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.482 21:39:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.482 21:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.wzuEKLfHBe 00:22:30.482 21:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wzuEKLfHBe 00:22:30.482 21:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:30.482 [2024-06-07 21:39:30.693750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.482 21:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:30.740 21:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.999 [2024-06-07 21:39:31.183055] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.999 [2024-06-07 21:39:31.183269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.999 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:31.258 malloc0 00:22:31.258 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:31.517 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.wzuEKLfHBe 00:22:31.775 [2024-06-07 21:39:31.910229] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:31.775 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wzuEKLfHBe 00:22:31.775 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:31.775 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:31.775 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:31.775 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wzuEKLfHBe' 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1491997 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1491997 /var/tmp/bdevperf.sock 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1491997 ']' 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:31.776 21:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.776 [2024-06-07 21:39:31.973759] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
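Pulling the trace together, setup_nvmf_tgt's target-side TLS configuration is a handful of RPC calls against the default socket (a sketch collecting the commands exactly as they appear above; KEY is the 0600-permission file created earlier):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.wzuEKLfHBe
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k          # -k: listener requires TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"  # deprecated PSK-path form

The bdevperf client started next attaches with the same key file, which is the positive case measured in the run below.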
00:22:31.776 [2024-06-07 21:39:31.973819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491997 ] 00:22:31.776 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.776 [2024-06-07 21:39:32.042141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.035 [2024-06-07 21:39:32.113051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.973 21:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:32.973 21:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:32.973 21:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wzuEKLfHBe 00:22:32.973 [2024-06-07 21:39:33.109776] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.973 [2024-06-07 21:39:33.109846] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:32.973 TLSTESTn1 00:22:32.973 21:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:33.231 Running I/O for 10 seconds... 00:22:43.206 00:22:43.206 Latency(us) 00:22:43.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.206 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:43.206 Verification LBA range: start 0x0 length 0x2000 00:22:43.206 TLSTESTn1 : 10.02 5222.57 20.40 0.00 0.00 24465.32 6315.29 69110.69 00:22:43.206 =================================================================================================================== 00:22:43.206 Total : 5222.57 20.40 0.00 0.00 24465.32 6315.29 69110.69 00:22:43.206 0 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1491997 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1491997 ']' 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1491997 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1491997 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1491997' 00:22:43.206 killing process with pid 1491997 00:22:43.206 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1491997 00:22:43.206 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.206 00:22:43.206 Latency(us) 00:22:43.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:22:43.206 =================================================================================================================== 00:22:43.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.207 [2024-06-07 21:39:43.438677] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:43.207 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1491997 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.wzuEKLfHBe 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wzuEKLfHBe 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wzuEKLfHBe 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wzuEKLfHBe 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wzuEKLfHBe' 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494086 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494086 /var/tmp/bdevperf.sock 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1494086 ']' 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:43.465 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.465 [2024-06-07 21:39:43.671197] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
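The chmod 0666 above sets up the next negative case: SPDK refuses to load a PSK whose file is readable by group or other. The expected shape of the failure, with RPC as defined in the earlier sketch (the exact error lines appear in the trace that follows):

    chmod 0666 /tmp/tmp.wzuEKLfHBe   # deliberately too permissive
    # bdev_nvme_load_psk rejects the file before any TLS traffic is sent:
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.wzuEKLfHBe \
        || echo 'expected: -1 Operation not permitted (bad key-file mode)'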
00:22:43.466 [2024-06-07 21:39:43.671261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494086 ] 00:22:43.466 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.724 [2024-06-07 21:39:43.735447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.724 [2024-06-07 21:39:43.798849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.724 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:43.724 21:39:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:43.724 21:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wzuEKLfHBe 00:22:43.983 [2024-06-07 21:39:44.117921] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.983 [2024-06-07 21:39:44.117966] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:43.983 [2024-06-07 21:39:44.117973] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.wzuEKLfHBe 00:22:43.983 request: 00:22:43.983 { 00:22:43.983 "name": "TLSTEST", 00:22:43.983 "trtype": "tcp", 00:22:43.983 "traddr": "10.0.0.2", 00:22:43.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.983 "adrfam": "ipv4", 00:22:43.983 "trsvcid": "4420", 00:22:43.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.983 "psk": "/tmp/tmp.wzuEKLfHBe", 00:22:43.983 "method": "bdev_nvme_attach_controller", 00:22:43.983 "req_id": 1 00:22:43.983 } 00:22:43.983 Got JSON-RPC error response 00:22:43.983 response: 00:22:43.983 { 00:22:43.983 "code": -1, 00:22:43.983 "message": "Operation not permitted" 00:22:43.983 } 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1494086 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1494086 ']' 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1494086 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1494086 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1494086' 00:22:43.984 killing process with pid 1494086 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1494086 00:22:43.984 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.984 00:22:43.984 Latency(us) 00:22:43.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.984 =================================================================================================================== 00:22:43.984 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.984 21:39:44 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@973 -- # wait 1494086 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1491483 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1491483 ']' 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1491483 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1491483 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1491483' 00:22:44.243 killing process with pid 1491483 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1491483 00:22:44.243 [2024-06-07 21:39:44.409103] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:44.243 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1491483 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1494235 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1494235 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1494235 ']' 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:44.502 21:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.502 [2024-06-07 21:39:44.681816] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
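The target enforces the same file-mode rule independently of the client: with the key still 0666, the NOT setup_nvmf_tgt run that follows gets as far as nvmf_subsystem_add_host and then fails when tcp_load_psk rejects the file. Sketch of the failing call (full trace below):

    # Server-side counterpart of the permission check:
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wzuEKLfHBe \
        || echo 'expected: -32603 Internal error (PSK not retrievable)'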
00:22:44.502 [2024-06-07 21:39:44.681877] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.502 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.502 [2024-06-07 21:39:44.769522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.761 [2024-06-07 21:39:44.858447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.761 [2024-06-07 21:39:44.858489] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.761 [2024-06-07 21:39:44.858500] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.761 [2024-06-07 21:39:44.858509] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.761 [2024-06-07 21:39:44.858517] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.761 [2024-06-07 21:39:44.858540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.wzuEKLfHBe 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wzuEKLfHBe 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.wzuEKLfHBe 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wzuEKLfHBe 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.697 [2024-06-07 21:39:45.880579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.697 21:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:45.956 21:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.215 [2024-06-07 21:39:46.365871] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:22:46.215 [2024-06-07 21:39:46.366093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.215 21:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.474 malloc0 00:22:46.474 21:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.733 21:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wzuEKLfHBe 00:22:46.992 [2024-06-07 21:39:47.080996] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:46.992 [2024-06-07 21:39:47.081031] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:46.992 [2024-06-07 21:39:47.081062] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:46.992 request: 00:22:46.992 { 00:22:46.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.992 "host": "nqn.2016-06.io.spdk:host1", 00:22:46.992 "psk": "/tmp/tmp.wzuEKLfHBe", 00:22:46.992 "method": "nvmf_subsystem_add_host", 00:22:46.992 "req_id": 1 00:22:46.992 } 00:22:46.992 Got JSON-RPC error response 00:22:46.992 response: 00:22:46.992 { 00:22:46.992 "code": -32603, 00:22:46.992 "message": "Internal error" 00:22:46.992 } 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1494235 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1494235 ']' 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1494235 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1494235 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1494235' 00:22:46.992 killing process with pid 1494235 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1494235 00:22:46.992 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1494235 00:22:47.251 21:39:47 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.wzuEKLfHBe 00:22:47.251 21:39:47 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:47.251 21:39:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.251 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:47.251 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.252 21:39:47 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1494673 00:22:47.252 21:39:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1494673 00:22:47.252 21:39:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:47.252 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1494673 ']' 00:22:47.252 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.252 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:47.252 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.252 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:47.252 21:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.252 [2024-06-07 21:39:47.425318] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:22:47.252 [2024-06-07 21:39:47.425377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.252 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.252 [2024-06-07 21:39:47.512902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.511 [2024-06-07 21:39:47.602286] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.511 [2024-06-07 21:39:47.602331] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.511 [2024-06-07 21:39:47.602342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.511 [2024-06-07 21:39:47.602350] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.511 [2024-06-07 21:39:47.602357] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
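With permissions restored to 0600 before this restart, the full target setup and a keyed bdevperf attach succeed again below; the test then snapshots both sides with save_config to confirm the PSK path was recorded in the persistent configuration. A quick way to eyeball the dumps that follow (a sketch; jq is an assumption of this note, not something the test itself uses):

    chmod 0600 /tmp/tmp.wzuEKLfHBe   # restore private permissions
    # Target side: the nvmf_subsystem_add_host entry should carry the psk path.
    $RPC save_config | jq '.subsystems[] | select(.subsystem=="nvmf")' | grep -B2 -A2 psk
    # Client side: bdev_nvme_attach_controller should carry the same path.
    $RPC -s /var/tmp/bdevperf.sock save_config | grep -B2 -A2 psk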
00:22:47.511 [2024-06-07 21:39:47.602386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.447 21:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:48.448 21:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:48.448 21:39:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.448 21:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:48.448 21:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.448 21:39:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.448 21:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.wzuEKLfHBe 00:22:48.448 21:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wzuEKLfHBe 00:22:48.448 21:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:48.448 [2024-06-07 21:39:48.709101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.706 21:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.706 21:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:48.965 [2024-06-07 21:39:49.190374] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.965 [2024-06-07 21:39:49.190576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.965 21:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:49.224 malloc0 00:22:49.224 21:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:49.483 21:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wzuEKLfHBe 00:22:49.743 [2024-06-07 21:39:49.913531] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1495204 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1495204 /var/tmp/bdevperf.sock 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1495204 ']' 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:49.743 21:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.743 [2024-06-07 21:39:49.975673] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:22:49.743 [2024-06-07 21:39:49.975732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495204 ] 00:22:49.743 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.002 [2024-06-07 21:39:50.042852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.002 [2024-06-07 21:39:50.113945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.002 21:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:50.002 21:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:50.002 21:39:50 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wzuEKLfHBe 00:22:50.266 [2024-06-07 21:39:50.433270] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.266 [2024-06-07 21:39:50.433343] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:50.266 TLSTESTn1 00:22:50.524 21:39:50 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:50.783 21:39:50 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:50.783 "subsystems": [ 00:22:50.783 { 00:22:50.783 "subsystem": "keyring", 00:22:50.783 "config": [] 00:22:50.783 }, 00:22:50.783 { 00:22:50.783 "subsystem": "iobuf", 00:22:50.783 "config": [ 00:22:50.783 { 00:22:50.783 "method": "iobuf_set_options", 00:22:50.783 "params": { 00:22:50.783 "small_pool_count": 8192, 00:22:50.783 "large_pool_count": 1024, 00:22:50.783 "small_bufsize": 8192, 00:22:50.783 "large_bufsize": 135168 00:22:50.783 } 00:22:50.783 } 00:22:50.783 ] 00:22:50.783 }, 00:22:50.783 { 00:22:50.783 "subsystem": "sock", 00:22:50.783 "config": [ 00:22:50.783 { 00:22:50.783 "method": "sock_set_default_impl", 00:22:50.783 "params": { 00:22:50.783 "impl_name": "posix" 00:22:50.783 } 00:22:50.783 }, 00:22:50.783 { 00:22:50.783 "method": "sock_impl_set_options", 00:22:50.783 "params": { 00:22:50.783 "impl_name": "ssl", 00:22:50.783 "recv_buf_size": 4096, 00:22:50.783 "send_buf_size": 4096, 00:22:50.783 "enable_recv_pipe": true, 00:22:50.783 "enable_quickack": false, 00:22:50.783 "enable_placement_id": 0, 00:22:50.783 "enable_zerocopy_send_server": true, 00:22:50.783 "enable_zerocopy_send_client": false, 00:22:50.783 "zerocopy_threshold": 0, 00:22:50.783 "tls_version": 0, 00:22:50.783 "enable_ktls": false 00:22:50.783 } 00:22:50.783 }, 00:22:50.783 { 00:22:50.783 "method": "sock_impl_set_options", 00:22:50.783 "params": { 00:22:50.783 "impl_name": "posix", 00:22:50.783 "recv_buf_size": 2097152, 00:22:50.783 
"send_buf_size": 2097152, 00:22:50.783 "enable_recv_pipe": true, 00:22:50.783 "enable_quickack": false, 00:22:50.783 "enable_placement_id": 0, 00:22:50.783 "enable_zerocopy_send_server": true, 00:22:50.783 "enable_zerocopy_send_client": false, 00:22:50.783 "zerocopy_threshold": 0, 00:22:50.783 "tls_version": 0, 00:22:50.783 "enable_ktls": false 00:22:50.783 } 00:22:50.783 } 00:22:50.783 ] 00:22:50.783 }, 00:22:50.783 { 00:22:50.783 "subsystem": "vmd", 00:22:50.783 "config": [] 00:22:50.783 }, 00:22:50.783 { 00:22:50.783 "subsystem": "accel", 00:22:50.783 "config": [ 00:22:50.783 { 00:22:50.783 "method": "accel_set_options", 00:22:50.783 "params": { 00:22:50.783 "small_cache_size": 128, 00:22:50.783 "large_cache_size": 16, 00:22:50.783 "task_count": 2048, 00:22:50.783 "sequence_count": 2048, 00:22:50.783 "buf_count": 2048 00:22:50.783 } 00:22:50.783 } 00:22:50.783 ] 00:22:50.783 }, 00:22:50.783 { 00:22:50.783 "subsystem": "bdev", 00:22:50.783 "config": [ 00:22:50.783 { 00:22:50.784 "method": "bdev_set_options", 00:22:50.784 "params": { 00:22:50.784 "bdev_io_pool_size": 65535, 00:22:50.784 "bdev_io_cache_size": 256, 00:22:50.784 "bdev_auto_examine": true, 00:22:50.784 "iobuf_small_cache_size": 128, 00:22:50.784 "iobuf_large_cache_size": 16 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "bdev_raid_set_options", 00:22:50.784 "params": { 00:22:50.784 "process_window_size_kb": 1024 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "bdev_iscsi_set_options", 00:22:50.784 "params": { 00:22:50.784 "timeout_sec": 30 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "bdev_nvme_set_options", 00:22:50.784 "params": { 00:22:50.784 "action_on_timeout": "none", 00:22:50.784 "timeout_us": 0, 00:22:50.784 "timeout_admin_us": 0, 00:22:50.784 "keep_alive_timeout_ms": 10000, 00:22:50.784 "arbitration_burst": 0, 00:22:50.784 "low_priority_weight": 0, 00:22:50.784 "medium_priority_weight": 0, 00:22:50.784 "high_priority_weight": 0, 00:22:50.784 "nvme_adminq_poll_period_us": 10000, 00:22:50.784 "nvme_ioq_poll_period_us": 0, 00:22:50.784 "io_queue_requests": 0, 00:22:50.784 "delay_cmd_submit": true, 00:22:50.784 "transport_retry_count": 4, 00:22:50.784 "bdev_retry_count": 3, 00:22:50.784 "transport_ack_timeout": 0, 00:22:50.784 "ctrlr_loss_timeout_sec": 0, 00:22:50.784 "reconnect_delay_sec": 0, 00:22:50.784 "fast_io_fail_timeout_sec": 0, 00:22:50.784 "disable_auto_failback": false, 00:22:50.784 "generate_uuids": false, 00:22:50.784 "transport_tos": 0, 00:22:50.784 "nvme_error_stat": false, 00:22:50.784 "rdma_srq_size": 0, 00:22:50.784 "io_path_stat": false, 00:22:50.784 "allow_accel_sequence": false, 00:22:50.784 "rdma_max_cq_size": 0, 00:22:50.784 "rdma_cm_event_timeout_ms": 0, 00:22:50.784 "dhchap_digests": [ 00:22:50.784 "sha256", 00:22:50.784 "sha384", 00:22:50.784 "sha512" 00:22:50.784 ], 00:22:50.784 "dhchap_dhgroups": [ 00:22:50.784 "null", 00:22:50.784 "ffdhe2048", 00:22:50.784 "ffdhe3072", 00:22:50.784 "ffdhe4096", 00:22:50.784 "ffdhe6144", 00:22:50.784 "ffdhe8192" 00:22:50.784 ] 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "bdev_nvme_set_hotplug", 00:22:50.784 "params": { 00:22:50.784 "period_us": 100000, 00:22:50.784 "enable": false 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "bdev_malloc_create", 00:22:50.784 "params": { 00:22:50.784 "name": "malloc0", 00:22:50.784 "num_blocks": 8192, 00:22:50.784 "block_size": 4096, 00:22:50.784 "physical_block_size": 4096, 00:22:50.784 "uuid": 
"48ffb2bd-4410-4fc1-88b2-a2268f30ae7d", 00:22:50.784 "optimal_io_boundary": 0 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "bdev_wait_for_examine" 00:22:50.784 } 00:22:50.784 ] 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "subsystem": "nbd", 00:22:50.784 "config": [] 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "subsystem": "scheduler", 00:22:50.784 "config": [ 00:22:50.784 { 00:22:50.784 "method": "framework_set_scheduler", 00:22:50.784 "params": { 00:22:50.784 "name": "static" 00:22:50.784 } 00:22:50.784 } 00:22:50.784 ] 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "subsystem": "nvmf", 00:22:50.784 "config": [ 00:22:50.784 { 00:22:50.784 "method": "nvmf_set_config", 00:22:50.784 "params": { 00:22:50.784 "discovery_filter": "match_any", 00:22:50.784 "admin_cmd_passthru": { 00:22:50.784 "identify_ctrlr": false 00:22:50.784 } 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "nvmf_set_max_subsystems", 00:22:50.784 "params": { 00:22:50.784 "max_subsystems": 1024 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "nvmf_set_crdt", 00:22:50.784 "params": { 00:22:50.784 "crdt1": 0, 00:22:50.784 "crdt2": 0, 00:22:50.784 "crdt3": 0 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "nvmf_create_transport", 00:22:50.784 "params": { 00:22:50.784 "trtype": "TCP", 00:22:50.784 "max_queue_depth": 128, 00:22:50.784 "max_io_qpairs_per_ctrlr": 127, 00:22:50.784 "in_capsule_data_size": 4096, 00:22:50.784 "max_io_size": 131072, 00:22:50.784 "io_unit_size": 131072, 00:22:50.784 "max_aq_depth": 128, 00:22:50.784 "num_shared_buffers": 511, 00:22:50.784 "buf_cache_size": 4294967295, 00:22:50.784 "dif_insert_or_strip": false, 00:22:50.784 "zcopy": false, 00:22:50.784 "c2h_success": false, 00:22:50.784 "sock_priority": 0, 00:22:50.784 "abort_timeout_sec": 1, 00:22:50.784 "ack_timeout": 0, 00:22:50.784 "data_wr_pool_size": 0 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "nvmf_create_subsystem", 00:22:50.784 "params": { 00:22:50.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.784 "allow_any_host": false, 00:22:50.784 "serial_number": "SPDK00000000000001", 00:22:50.784 "model_number": "SPDK bdev Controller", 00:22:50.784 "max_namespaces": 10, 00:22:50.784 "min_cntlid": 1, 00:22:50.784 "max_cntlid": 65519, 00:22:50.784 "ana_reporting": false 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "nvmf_subsystem_add_host", 00:22:50.784 "params": { 00:22:50.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.784 "host": "nqn.2016-06.io.spdk:host1", 00:22:50.784 "psk": "/tmp/tmp.wzuEKLfHBe" 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "nvmf_subsystem_add_ns", 00:22:50.784 "params": { 00:22:50.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.784 "namespace": { 00:22:50.784 "nsid": 1, 00:22:50.784 "bdev_name": "malloc0", 00:22:50.784 "nguid": "48FFB2BD44104FC188B2A2268F30AE7D", 00:22:50.784 "uuid": "48ffb2bd-4410-4fc1-88b2-a2268f30ae7d", 00:22:50.784 "no_auto_visible": false 00:22:50.784 } 00:22:50.784 } 00:22:50.784 }, 00:22:50.784 { 00:22:50.784 "method": "nvmf_subsystem_add_listener", 00:22:50.784 "params": { 00:22:50.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.784 "listen_address": { 00:22:50.784 "trtype": "TCP", 00:22:50.784 "adrfam": "IPv4", 00:22:50.784 "traddr": "10.0.0.2", 00:22:50.784 "trsvcid": "4420" 00:22:50.784 }, 00:22:50.784 "secure_channel": true 00:22:50.784 } 00:22:50.784 } 00:22:50.784 ] 00:22:50.784 } 00:22:50.784 ] 00:22:50.784 }' 00:22:50.784 21:39:50 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:51.043 21:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:51.043 "subsystems": [ 00:22:51.043 { 00:22:51.043 "subsystem": "keyring", 00:22:51.043 "config": [] 00:22:51.043 }, 00:22:51.043 { 00:22:51.043 "subsystem": "iobuf", 00:22:51.043 "config": [ 00:22:51.043 { 00:22:51.043 "method": "iobuf_set_options", 00:22:51.043 "params": { 00:22:51.043 "small_pool_count": 8192, 00:22:51.043 "large_pool_count": 1024, 00:22:51.043 "small_bufsize": 8192, 00:22:51.043 "large_bufsize": 135168 00:22:51.043 } 00:22:51.043 } 00:22:51.043 ] 00:22:51.043 }, 00:22:51.043 { 00:22:51.043 "subsystem": "sock", 00:22:51.043 "config": [ 00:22:51.043 { 00:22:51.043 "method": "sock_set_default_impl", 00:22:51.043 "params": { 00:22:51.043 "impl_name": "posix" 00:22:51.043 } 00:22:51.043 }, 00:22:51.043 { 00:22:51.043 "method": "sock_impl_set_options", 00:22:51.043 "params": { 00:22:51.043 "impl_name": "ssl", 00:22:51.043 "recv_buf_size": 4096, 00:22:51.043 "send_buf_size": 4096, 00:22:51.043 "enable_recv_pipe": true, 00:22:51.043 "enable_quickack": false, 00:22:51.043 "enable_placement_id": 0, 00:22:51.043 "enable_zerocopy_send_server": true, 00:22:51.043 "enable_zerocopy_send_client": false, 00:22:51.043 "zerocopy_threshold": 0, 00:22:51.043 "tls_version": 0, 00:22:51.043 "enable_ktls": false 00:22:51.043 } 00:22:51.043 }, 00:22:51.043 { 00:22:51.043 "method": "sock_impl_set_options", 00:22:51.043 "params": { 00:22:51.043 "impl_name": "posix", 00:22:51.043 "recv_buf_size": 2097152, 00:22:51.043 "send_buf_size": 2097152, 00:22:51.043 "enable_recv_pipe": true, 00:22:51.043 "enable_quickack": false, 00:22:51.043 "enable_placement_id": 0, 00:22:51.043 "enable_zerocopy_send_server": true, 00:22:51.043 "enable_zerocopy_send_client": false, 00:22:51.043 "zerocopy_threshold": 0, 00:22:51.043 "tls_version": 0, 00:22:51.043 "enable_ktls": false 00:22:51.043 } 00:22:51.043 } 00:22:51.043 ] 00:22:51.043 }, 00:22:51.043 { 00:22:51.043 "subsystem": "vmd", 00:22:51.043 "config": [] 00:22:51.043 }, 00:22:51.043 { 00:22:51.043 "subsystem": "accel", 00:22:51.043 "config": [ 00:22:51.043 { 00:22:51.043 "method": "accel_set_options", 00:22:51.043 "params": { 00:22:51.044 "small_cache_size": 128, 00:22:51.044 "large_cache_size": 16, 00:22:51.044 "task_count": 2048, 00:22:51.044 "sequence_count": 2048, 00:22:51.044 "buf_count": 2048 00:22:51.044 } 00:22:51.044 } 00:22:51.044 ] 00:22:51.044 }, 00:22:51.044 { 00:22:51.044 "subsystem": "bdev", 00:22:51.044 "config": [ 00:22:51.044 { 00:22:51.044 "method": "bdev_set_options", 00:22:51.044 "params": { 00:22:51.044 "bdev_io_pool_size": 65535, 00:22:51.044 "bdev_io_cache_size": 256, 00:22:51.044 "bdev_auto_examine": true, 00:22:51.044 "iobuf_small_cache_size": 128, 00:22:51.044 "iobuf_large_cache_size": 16 00:22:51.044 } 00:22:51.044 }, 00:22:51.044 { 00:22:51.044 "method": "bdev_raid_set_options", 00:22:51.044 "params": { 00:22:51.044 "process_window_size_kb": 1024 00:22:51.044 } 00:22:51.044 }, 00:22:51.044 { 00:22:51.044 "method": "bdev_iscsi_set_options", 00:22:51.044 "params": { 00:22:51.044 "timeout_sec": 30 00:22:51.044 } 00:22:51.044 }, 00:22:51.044 { 00:22:51.044 "method": "bdev_nvme_set_options", 00:22:51.044 "params": { 00:22:51.044 "action_on_timeout": "none", 00:22:51.044 "timeout_us": 0, 00:22:51.044 "timeout_admin_us": 0, 00:22:51.044 "keep_alive_timeout_ms": 10000, 00:22:51.044 "arbitration_burst": 0, 
00:22:51.044 "low_priority_weight": 0, 00:22:51.044 "medium_priority_weight": 0, 00:22:51.044 "high_priority_weight": 0, 00:22:51.044 "nvme_adminq_poll_period_us": 10000, 00:22:51.044 "nvme_ioq_poll_period_us": 0, 00:22:51.044 "io_queue_requests": 512, 00:22:51.044 "delay_cmd_submit": true, 00:22:51.044 "transport_retry_count": 4, 00:22:51.044 "bdev_retry_count": 3, 00:22:51.044 "transport_ack_timeout": 0, 00:22:51.044 "ctrlr_loss_timeout_sec": 0, 00:22:51.044 "reconnect_delay_sec": 0, 00:22:51.044 "fast_io_fail_timeout_sec": 0, 00:22:51.044 "disable_auto_failback": false, 00:22:51.044 "generate_uuids": false, 00:22:51.044 "transport_tos": 0, 00:22:51.044 "nvme_error_stat": false, 00:22:51.044 "rdma_srq_size": 0, 00:22:51.044 "io_path_stat": false, 00:22:51.044 "allow_accel_sequence": false, 00:22:51.044 "rdma_max_cq_size": 0, 00:22:51.044 "rdma_cm_event_timeout_ms": 0, 00:22:51.044 "dhchap_digests": [ 00:22:51.044 "sha256", 00:22:51.044 "sha384", 00:22:51.044 "sha512" 00:22:51.044 ], 00:22:51.044 "dhchap_dhgroups": [ 00:22:51.044 "null", 00:22:51.044 "ffdhe2048", 00:22:51.044 "ffdhe3072", 00:22:51.044 "ffdhe4096", 00:22:51.044 "ffdhe6144", 00:22:51.044 "ffdhe8192" 00:22:51.044 ] 00:22:51.044 } 00:22:51.044 }, 00:22:51.044 { 00:22:51.044 "method": "bdev_nvme_attach_controller", 00:22:51.044 "params": { 00:22:51.044 "name": "TLSTEST", 00:22:51.044 "trtype": "TCP", 00:22:51.044 "adrfam": "IPv4", 00:22:51.044 "traddr": "10.0.0.2", 00:22:51.044 "trsvcid": "4420", 00:22:51.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.044 "prchk_reftag": false, 00:22:51.044 "prchk_guard": false, 00:22:51.044 "ctrlr_loss_timeout_sec": 0, 00:22:51.044 "reconnect_delay_sec": 0, 00:22:51.044 "fast_io_fail_timeout_sec": 0, 00:22:51.044 "psk": "/tmp/tmp.wzuEKLfHBe", 00:22:51.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.044 "hdgst": false, 00:22:51.044 "ddgst": false 00:22:51.044 } 00:22:51.044 }, 00:22:51.044 { 00:22:51.044 "method": "bdev_nvme_set_hotplug", 00:22:51.044 "params": { 00:22:51.044 "period_us": 100000, 00:22:51.044 "enable": false 00:22:51.044 } 00:22:51.044 }, 00:22:51.044 { 00:22:51.044 "method": "bdev_wait_for_examine" 00:22:51.044 } 00:22:51.044 ] 00:22:51.044 }, 00:22:51.044 { 00:22:51.044 "subsystem": "nbd", 00:22:51.044 "config": [] 00:22:51.044 } 00:22:51.044 ] 00:22:51.044 }' 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1495204 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1495204 ']' 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1495204 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1495204 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1495204' 00:22:51.044 killing process with pid 1495204 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1495204 00:22:51.044 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.044 00:22:51.044 Latency(us) 00:22:51.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:51.044 =================================================================================================================== 00:22:51.044 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.044 [2024-06-07 21:39:51.232494] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:51.044 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1495204 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1494673 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1494673 ']' 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1494673 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1494673 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1494673' 00:22:51.302 killing process with pid 1494673 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1494673 00:22:51.302 [2024-06-07 21:39:51.454256] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:51.302 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1494673 00:22:51.561 21:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:51.561 21:39:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.561 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:51.561 21:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:51.561 "subsystems": [ 00:22:51.561 { 00:22:51.561 "subsystem": "keyring", 00:22:51.561 "config": [] 00:22:51.561 }, 00:22:51.561 { 00:22:51.561 "subsystem": "iobuf", 00:22:51.561 "config": [ 00:22:51.561 { 00:22:51.561 "method": "iobuf_set_options", 00:22:51.561 "params": { 00:22:51.561 "small_pool_count": 8192, 00:22:51.561 "large_pool_count": 1024, 00:22:51.561 "small_bufsize": 8192, 00:22:51.561 "large_bufsize": 135168 00:22:51.561 } 00:22:51.561 } 00:22:51.561 ] 00:22:51.561 }, 00:22:51.561 { 00:22:51.561 "subsystem": "sock", 00:22:51.561 "config": [ 00:22:51.561 { 00:22:51.561 "method": "sock_set_default_impl", 00:22:51.561 "params": { 00:22:51.561 "impl_name": "posix" 00:22:51.561 } 00:22:51.561 }, 00:22:51.561 { 00:22:51.561 "method": "sock_impl_set_options", 00:22:51.561 "params": { 00:22:51.561 "impl_name": "ssl", 00:22:51.561 "recv_buf_size": 4096, 00:22:51.561 "send_buf_size": 4096, 00:22:51.561 "enable_recv_pipe": true, 00:22:51.561 "enable_quickack": false, 00:22:51.561 "enable_placement_id": 0, 00:22:51.561 "enable_zerocopy_send_server": true, 00:22:51.561 "enable_zerocopy_send_client": false, 00:22:51.561 "zerocopy_threshold": 0, 00:22:51.561 "tls_version": 0, 00:22:51.561 "enable_ktls": false 00:22:51.561 } 00:22:51.561 }, 00:22:51.561 { 00:22:51.561 "method": "sock_impl_set_options", 00:22:51.561 "params": { 00:22:51.561 "impl_name": "posix", 00:22:51.561 
"recv_buf_size": 2097152, 00:22:51.561 "send_buf_size": 2097152, 00:22:51.561 "enable_recv_pipe": true, 00:22:51.561 "enable_quickack": false, 00:22:51.561 "enable_placement_id": 0, 00:22:51.561 "enable_zerocopy_send_server": true, 00:22:51.561 "enable_zerocopy_send_client": false, 00:22:51.561 "zerocopy_threshold": 0, 00:22:51.561 "tls_version": 0, 00:22:51.561 "enable_ktls": false 00:22:51.561 } 00:22:51.561 } 00:22:51.561 ] 00:22:51.561 }, 00:22:51.561 { 00:22:51.561 "subsystem": "vmd", 00:22:51.562 "config": [] 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "subsystem": "accel", 00:22:51.562 "config": [ 00:22:51.562 { 00:22:51.562 "method": "accel_set_options", 00:22:51.562 "params": { 00:22:51.562 "small_cache_size": 128, 00:22:51.562 "large_cache_size": 16, 00:22:51.562 "task_count": 2048, 00:22:51.562 "sequence_count": 2048, 00:22:51.562 "buf_count": 2048 00:22:51.562 } 00:22:51.562 } 00:22:51.562 ] 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "subsystem": "bdev", 00:22:51.562 "config": [ 00:22:51.562 { 00:22:51.562 "method": "bdev_set_options", 00:22:51.562 "params": { 00:22:51.562 "bdev_io_pool_size": 65535, 00:22:51.562 "bdev_io_cache_size": 256, 00:22:51.562 "bdev_auto_examine": true, 00:22:51.562 "iobuf_small_cache_size": 128, 00:22:51.562 "iobuf_large_cache_size": 16 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "bdev_raid_set_options", 00:22:51.562 "params": { 00:22:51.562 "process_window_size_kb": 1024 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "bdev_iscsi_set_options", 00:22:51.562 "params": { 00:22:51.562 "timeout_sec": 30 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "bdev_nvme_set_options", 00:22:51.562 "params": { 00:22:51.562 "action_on_timeout": "none", 00:22:51.562 "timeout_us": 0, 00:22:51.562 "timeout_admin_us": 0, 00:22:51.562 "keep_alive_timeout_ms": 10000, 00:22:51.562 "arbitration_burst": 0, 00:22:51.562 "low_priority_weight": 0, 00:22:51.562 "medium_priority_weight": 0, 00:22:51.562 "high_priority_weight": 0, 00:22:51.562 "nvme_adminq_poll_period_us": 10000, 00:22:51.562 "nvme_ioq_poll_period_us": 0, 00:22:51.562 "io_queue_requests": 0, 00:22:51.562 "delay_cmd_submit": true, 00:22:51.562 "transport_retry_count": 4, 00:22:51.562 "bdev_retry_count": 3, 00:22:51.562 "transport_ack_timeout": 0, 00:22:51.562 "ctrlr_loss_timeout_sec": 0, 00:22:51.562 "reconnect_delay_sec": 0, 00:22:51.562 "fast_io_fail_timeout_sec": 0, 00:22:51.562 "disable_auto_failback": false, 00:22:51.562 "generate_uuids": false, 00:22:51.562 "transport_tos": 0, 00:22:51.562 "nvme_error_stat": false, 00:22:51.562 "rdma_srq_size": 0, 00:22:51.562 "io_path_stat": false, 00:22:51.562 "allow_accel_sequence": false, 00:22:51.562 "rdma_max_cq_size": 0, 00:22:51.562 "rdma_cm_event_timeout_ms": 0, 00:22:51.562 "dhchap_digests": [ 00:22:51.562 "sha256", 00:22:51.562 "sha384", 00:22:51.562 "sha512" 00:22:51.562 ], 00:22:51.562 "dhchap_dhgroups": [ 00:22:51.562 "null", 00:22:51.562 "ffdhe2048", 00:22:51.562 "ffdhe3072", 00:22:51.562 "ffdhe4096", 00:22:51.562 "ffdhe6144", 00:22:51.562 "ffdhe8192" 00:22:51.562 ] 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "bdev_nvme_set_hotplug", 00:22:51.562 "params": { 00:22:51.562 "period_us": 100000, 00:22:51.562 "enable": false 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "bdev_malloc_create", 00:22:51.562 "params": { 00:22:51.562 "name": "malloc0", 00:22:51.562 "num_blocks": 8192, 00:22:51.562 "block_size": 4096, 00:22:51.562 "physical_block_size": 4096, 
00:22:51.562 "uuid": "48ffb2bd-4410-4fc1-88b2-a2268f30ae7d", 00:22:51.562 "optimal_io_boundary": 0 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "bdev_wait_for_examine" 00:22:51.562 } 00:22:51.562 ] 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "subsystem": "nbd", 00:22:51.562 "config": [] 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "subsystem": "scheduler", 00:22:51.562 "config": [ 00:22:51.562 { 00:22:51.562 "method": "framework_set_scheduler", 00:22:51.562 "params": { 00:22:51.562 "name": "static" 00:22:51.562 } 00:22:51.562 } 00:22:51.562 ] 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "subsystem": "nvmf", 00:22:51.562 "config": [ 00:22:51.562 { 00:22:51.562 "method": "nvmf_set_config", 00:22:51.562 "params": { 00:22:51.562 "discovery_filter": "match_any", 00:22:51.562 "admin_cmd_passthru": { 00:22:51.562 "identify_ctrlr": false 00:22:51.562 } 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "nvmf_set_max_subsystems", 00:22:51.562 "params": { 00:22:51.562 "max_subsystems": 1024 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "nvmf_set_crdt", 00:22:51.562 "params": { 00:22:51.562 "crdt1": 0, 00:22:51.562 "crdt2": 0, 00:22:51.562 "crdt3": 0 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "nvmf_create_transport", 00:22:51.562 "params": { 00:22:51.562 "trtype": "TCP", 00:22:51.562 "max_queue_depth": 128, 00:22:51.562 "max_io_qpairs_per_ctrlr": 127, 00:22:51.562 "in_capsule_data_size": 4096, 00:22:51.562 "max_io_size": 131072, 00:22:51.562 "io_unit_size": 131072, 00:22:51.562 "max_aq_depth": 128, 00:22:51.562 "num_shared_buffers": 511, 00:22:51.562 "buf_cache_size": 4294967295, 00:22:51.562 "dif_insert_or_strip": false, 00:22:51.562 "zcopy": false, 00:22:51.562 "c2h_success": false, 00:22:51.562 "sock_priority": 0, 00:22:51.562 "abort_timeout_sec": 1, 00:22:51.562 "ack_timeout": 0, 00:22:51.562 "data_wr_pool_size": 0 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "nvmf_create_subsystem", 00:22:51.562 "params": { 00:22:51.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.562 "allow_any_host": false, 00:22:51.562 "serial_number": "SPDK00000000000001", 00:22:51.562 "model_number": "SPDK bdev Controller", 00:22:51.562 "max_namespaces": 10, 00:22:51.562 "min_cntlid": 1, 00:22:51.562 "max_cntlid": 65519, 00:22:51.562 "ana_reporting": false 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "nvmf_subsystem_add_host", 00:22:51.562 "params": { 00:22:51.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.562 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.562 "psk": "/tmp/tmp.wzuEKLfHBe" 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "nvmf_subsystem_add_ns", 00:22:51.562 "params": { 00:22:51.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.562 "namespace": { 00:22:51.562 "nsid": 1, 00:22:51.562 "bdev_name": "malloc0", 00:22:51.562 "nguid": "48FFB2BD44104FC188B2A2268F30AE7D", 00:22:51.562 "uuid": "48ffb2bd-4410-4fc1-88b2-a2268f30ae7d", 00:22:51.562 "no_auto_visible": false 00:22:51.562 } 00:22:51.562 } 00:22:51.562 }, 00:22:51.562 { 00:22:51.562 "method": "nvmf_subsystem_add_listener", 00:22:51.562 "params": { 00:22:51.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.562 "listen_address": { 00:22:51.562 "trtype": "TCP", 00:22:51.562 "adrfam": "IPv4", 00:22:51.562 "traddr": "10.0.0.2", 00:22:51.562 "trsvcid": "4420" 00:22:51.562 }, 00:22:51.562 "secure_channel": true 00:22:51.562 } 00:22:51.562 } 00:22:51.562 ] 00:22:51.562 } 00:22:51.562 ] 00:22:51.562 }' 
00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1495489 00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1495489 00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1495489 ']' 00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.562 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:51.563 21:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.563 [2024-06-07 21:39:51.726745] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:22:51.563 [2024-06-07 21:39:51.726801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.563 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.563 [2024-06-07 21:39:51.811870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.821 [2024-06-07 21:39:51.901461] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.821 [2024-06-07 21:39:51.901502] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.821 [2024-06-07 21:39:51.901512] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.821 [2024-06-07 21:39:51.901521] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.821 [2024-06-07 21:39:51.901528] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
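The app_setup_trace notices above are the standard hint for debugging a run like this one: the tracepoint group mask is 0xFFFF (the -e flag on the nvmf_tgt command line), so every group is recorded into a shared-memory file named after the app and its instance id. A minimal capture, assuming the spdk_trace tool was built alongside the target under build/bin, would be:

  # snapshot the tracepoints of the running nvmf target (shm name nvmf, instance id 0)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw ring for offline analysis, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0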
00:22:51.821 [2024-06-07 21:39:51.901591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.079 [2024-06-07 21:39:52.112966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.080 [2024-06-07 21:39:52.128903] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.080 [2024-06-07 21:39:52.144963] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.080 [2024-06-07 21:39:52.161333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1495763 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1495763 /var/tmp/bdevperf.sock 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1495763 ']' 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
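bdevperf is launched here with -z, so it idles until driven over /var/tmp/bdevperf.sock, and it takes its entire configuration from a file descriptor rather than a file on disk: the JSON echoed below is what arrives on /dev/fd/63. That wiring is ordinary bash process substitution; a sketch with the same flags, where bperf_cfg is a hypothetical variable holding the JSON below:

  # <(...) appears inside the process as a /dev/fd path (63 in this run), matching the -c argument
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
      -c <(echo "$bperf_cfg")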
00:22:52.647 21:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:52.647 "subsystems": [ 00:22:52.647 { 00:22:52.647 "subsystem": "keyring", 00:22:52.647 "config": [] 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "subsystem": "iobuf", 00:22:52.647 "config": [ 00:22:52.647 { 00:22:52.647 "method": "iobuf_set_options", 00:22:52.647 "params": { 00:22:52.647 "small_pool_count": 8192, 00:22:52.647 "large_pool_count": 1024, 00:22:52.647 "small_bufsize": 8192, 00:22:52.647 "large_bufsize": 135168 00:22:52.647 } 00:22:52.647 } 00:22:52.647 ] 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "subsystem": "sock", 00:22:52.647 "config": [ 00:22:52.647 { 00:22:52.647 "method": "sock_set_default_impl", 00:22:52.647 "params": { 00:22:52.647 "impl_name": "posix" 00:22:52.647 } 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "method": "sock_impl_set_options", 00:22:52.647 "params": { 00:22:52.647 "impl_name": "ssl", 00:22:52.647 "recv_buf_size": 4096, 00:22:52.647 "send_buf_size": 4096, 00:22:52.647 "enable_recv_pipe": true, 00:22:52.647 "enable_quickack": false, 00:22:52.647 "enable_placement_id": 0, 00:22:52.647 "enable_zerocopy_send_server": true, 00:22:52.647 "enable_zerocopy_send_client": false, 00:22:52.647 "zerocopy_threshold": 0, 00:22:52.647 "tls_version": 0, 00:22:52.647 "enable_ktls": false 00:22:52.647 } 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "method": "sock_impl_set_options", 00:22:52.647 "params": { 00:22:52.647 "impl_name": "posix", 00:22:52.647 "recv_buf_size": 2097152, 00:22:52.647 "send_buf_size": 2097152, 00:22:52.647 "enable_recv_pipe": true, 00:22:52.647 "enable_quickack": false, 00:22:52.647 "enable_placement_id": 0, 00:22:52.647 "enable_zerocopy_send_server": true, 00:22:52.647 "enable_zerocopy_send_client": false, 00:22:52.647 "zerocopy_threshold": 0, 00:22:52.647 "tls_version": 0, 00:22:52.647 "enable_ktls": false 00:22:52.647 } 00:22:52.647 } 00:22:52.647 ] 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "subsystem": "vmd", 00:22:52.647 "config": [] 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "subsystem": "accel", 00:22:52.647 "config": [ 00:22:52.647 { 00:22:52.647 "method": "accel_set_options", 00:22:52.647 "params": { 00:22:52.647 "small_cache_size": 128, 00:22:52.647 "large_cache_size": 16, 00:22:52.647 "task_count": 2048, 00:22:52.647 "sequence_count": 2048, 00:22:52.647 "buf_count": 2048 00:22:52.647 } 00:22:52.647 } 00:22:52.647 ] 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "subsystem": "bdev", 00:22:52.647 "config": [ 00:22:52.647 { 00:22:52.647 "method": "bdev_set_options", 00:22:52.647 "params": { 00:22:52.647 "bdev_io_pool_size": 65535, 00:22:52.647 "bdev_io_cache_size": 256, 00:22:52.647 "bdev_auto_examine": true, 00:22:52.647 "iobuf_small_cache_size": 128, 00:22:52.647 "iobuf_large_cache_size": 16 00:22:52.647 } 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "method": "bdev_raid_set_options", 00:22:52.647 "params": { 00:22:52.647 "process_window_size_kb": 1024 00:22:52.647 } 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "method": "bdev_iscsi_set_options", 00:22:52.647 "params": { 00:22:52.647 "timeout_sec": 30 00:22:52.647 } 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "method": "bdev_nvme_set_options", 00:22:52.647 "params": { 00:22:52.647 "action_on_timeout": "none", 00:22:52.647 "timeout_us": 0, 00:22:52.647 "timeout_admin_us": 0, 00:22:52.647 "keep_alive_timeout_ms": 10000, 00:22:52.647 "arbitration_burst": 0, 00:22:52.647 "low_priority_weight": 0, 00:22:52.647 "medium_priority_weight": 0, 00:22:52.647 "high_priority_weight": 0, 00:22:52.647 
"nvme_adminq_poll_period_us": 10000, 00:22:52.647 "nvme_ioq_poll_period_us": 0, 00:22:52.647 "io_queue_requests": 512, 00:22:52.647 "delay_cmd_submit": true, 00:22:52.647 "transport_retry_count": 4, 00:22:52.647 "bdev_retry_count": 3, 00:22:52.647 "transport_ack_timeout": 0, 00:22:52.647 "ctrlr_loss_timeout_sec": 0, 00:22:52.647 "reconnect_delay_sec": 0, 00:22:52.647 "fast_io_fail_timeout_sec": 0, 00:22:52.647 "disable_auto_failback": false, 00:22:52.647 "generate_uuids": false, 00:22:52.647 "transport_tos": 0, 00:22:52.647 "nvme_error_stat": false, 00:22:52.647 "rdma_srq_size": 0, 00:22:52.647 "io_path_stat": false, 00:22:52.647 "allow_accel_sequence": false, 00:22:52.647 "rdma_max_cq_size": 0, 00:22:52.647 "rdma_cm_event_timeout_ms": 0, 00:22:52.647 "dhchap_digests": [ 00:22:52.647 "sha256", 00:22:52.647 "sha384", 00:22:52.647 "sha512" 00:22:52.647 ], 00:22:52.647 "dhchap_dhgroups": [ 00:22:52.647 "null", 00:22:52.647 "ffdhe2048", 00:22:52.647 "ffdhe3072", 00:22:52.647 "ffdhe4096", 00:22:52.647 "ffdhe6144", 00:22:52.647 "ffdhe8192" 00:22:52.647 ] 00:22:52.647 } 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "method": "bdev_nvme_attach_controller", 00:22:52.647 "params": { 00:22:52.647 "name": "TLSTEST", 00:22:52.647 "trtype": "TCP", 00:22:52.647 "adrfam": "IPv4", 00:22:52.647 "traddr": "10.0.0.2", 00:22:52.647 "trsvcid": "4420", 00:22:52.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.647 "prchk_reftag": false, 00:22:52.647 "prchk_guard": false, 00:22:52.647 "ctrlr_loss_timeout_sec": 0, 00:22:52.647 "reconnect_delay_sec": 0, 00:22:52.647 "fast_io_fail_timeout_sec": 0, 00:22:52.647 "psk": "/tmp/tmp.wzuEKLfHBe", 00:22:52.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.647 "hdgst": false, 00:22:52.647 "ddgst": false 00:22:52.647 } 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "method": "bdev_nvme_set_hotplug", 00:22:52.647 "params": { 00:22:52.647 "period_us": 100000, 00:22:52.647 "enable": false 00:22:52.647 } 00:22:52.647 }, 00:22:52.647 { 00:22:52.647 "method": "bdev_wait_for_examine" 00:22:52.647 } 00:22:52.648 ] 00:22:52.648 }, 00:22:52.648 { 00:22:52.648 "subsystem": "nbd", 00:22:52.648 "config": [] 00:22:52.648 } 00:22:52.648 ] 00:22:52.648 }' 00:22:52.648 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:52.648 21:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.648 [2024-06-07 21:39:52.747260] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:22:52.648 [2024-06-07 21:39:52.747322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495763 ] 00:22:52.648 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.648 [2024-06-07 21:39:52.811893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.648 [2024-06-07 21:39:52.879881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.907 [2024-06-07 21:39:53.021008] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.907 [2024-06-07 21:39:53.021097] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.474 21:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:53.474 21:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:53.474 21:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.732 Running I/O for 10 seconds... 00:23:03.763 00:23:03.763 Latency(us) 00:23:03.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.763 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.763 Verification LBA range: start 0x0 length 0x2000 00:23:03.763 TLSTESTn1 : 10.02 5019.12 19.61 0.00 0.00 25455.27 4379.00 40274.85 00:23:03.763 =================================================================================================================== 00:23:03.763 Total : 5019.12 19.61 0.00 0.00 25455.27 4379.00 40274.85 00:23:03.763 0 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1495763 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1495763 ']' 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1495763 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1495763 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1495763' 00:23:03.763 killing process with pid 1495763 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1495763 00:23:03.763 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.763 00:23:03.763 Latency(us) 00:23:03.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.763 =================================================================================================================== 00:23:03.763 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.763 [2024-06-07 21:40:03.928563] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:03.763 21:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1495763 00:23:04.021 21:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1495489 00:23:04.021 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1495489 ']' 00:23:04.021 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1495489 00:23:04.021 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:04.021 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:04.022 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1495489 00:23:04.022 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:04.022 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:04.022 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1495489' 00:23:04.022 killing process with pid 1495489 00:23:04.022 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1495489 00:23:04.022 [2024-06-07 21:40:04.154783] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:04.022 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1495489 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1497840 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1497840 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1497840 ']' 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:04.280 21:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.280 [2024-06-07 21:40:04.429968] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:23:04.281 [2024-06-07 21:40:04.430035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.281 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.281 [2024-06-07 21:40:04.526161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.540 [2024-06-07 21:40:04.611282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.540 [2024-06-07 21:40:04.611330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.540 [2024-06-07 21:40:04.611340] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.540 [2024-06-07 21:40:04.611349] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.540 [2024-06-07 21:40:04.611356] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.540 [2024-06-07 21:40:04.611385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.107 21:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:05.107 21:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:05.107 21:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.107 21:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:05.107 21:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.365 21:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.365 21:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.wzuEKLfHBe 00:23:05.365 21:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wzuEKLfHBe 00:23:05.365 21:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:05.365 [2024-06-07 21:40:05.625885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.624 21:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:05.882 21:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:05.882 [2024-06-07 21:40:06.123192] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.882 [2024-06-07 21:40:06.123400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.882 21:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:06.141 malloc0 00:23:06.141 21:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:06.400 21:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.wzuEKLfHBe 00:23:06.658 [2024-06-07 21:40:06.862445] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:06.658 21:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1498163 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1498163 /var/tmp/bdevperf.sock 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1498163 ']' 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:06.659 21:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.917 [2024-06-07 21:40:06.929442] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:23:06.917 [2024-06-07 21:40:06.929506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498163 ] 00:23:06.917 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.917 [2024-06-07 21:40:07.012931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.917 [2024-06-07 21:40:07.101012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.176 21:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:07.176 21:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:07.176 21:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wzuEKLfHBe 00:23:07.176 21:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:07.433 [2024-06-07 21:40:07.657359] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.691 nvme0n1 00:23:07.691 21:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:07.691 Running I/O for 1 seconds... 
00:23:09.063 00:23:09.063 Latency(us) 00:23:09.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.063 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:09.063 Verification LBA range: start 0x0 length 0x2000 00:23:09.063 nvme0n1 : 1.03 3706.97 14.48 0.00 0.00 34098.89 10545.34 63867.81 00:23:09.063 =================================================================================================================== 00:23:09.063 Total : 3706.97 14.48 0.00 0.00 34098.89 10545.34 63867.81 00:23:09.063 0 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1498163 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1498163 ']' 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1498163 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1498163 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1498163' 00:23:09.063 killing process with pid 1498163 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1498163 00:23:09.063 Received shutdown signal, test time was about 1.000000 seconds 00:23:09.063 00:23:09.063 Latency(us) 00:23:09.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.063 =================================================================================================================== 00:23:09.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.063 21:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1498163 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1497840 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1497840 ']' 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1497840 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1497840 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1497840' 00:23:09.063 killing process with pid 1497840 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1497840 00:23:09.063 [2024-06-07 21:40:09.206209] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:09.063 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1497840 00:23:09.321 21:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.322 
21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1498696 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1498696 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1498696 ']' 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:09.322 21:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.322 [2024-06-07 21:40:09.480816] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:23:09.322 [2024-06-07 21:40:09.480876] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.322 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.322 [2024-06-07 21:40:09.574569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.580 [2024-06-07 21:40:09.658201] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.580 [2024-06-07 21:40:09.658245] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.580 [2024-06-07 21:40:09.658255] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.580 [2024-06-07 21:40:09.658263] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.580 [2024-06-07 21:40:09.658272] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
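The run starting here (target/tls.sh@238) is the keyring variant of the same test: instead of handing either process a PSK file path, the key is registered once as key0 and every later reference, including nvmf_subsystem_add_host, is by name. The tgtcfg and bperfcfg dumps taken below via save_config (target/tls.sh@263 and @264) are how the test confirms both processes carry the key; a quick manual spot check of the same output could look like this, with rpc.py abbreviated as before and the python3 filter as an illustration, not part of the test:

  # dump the live bdevperf configuration and keep only the keyring subsystem
  rpc.py -s /var/tmp/bdevperf.sock save_config \
    | python3 -c 'import json,sys; print([s["config"] for s in json.load(sys.stdin)["subsystems"] if s["subsystem"] == "keyring"])'

which should print the single keyring_file_add_key entry for key0 pointing at /tmp/tmp.wzuEKLfHBe.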
00:23:09.580 [2024-06-07 21:40:09.658300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.516 [2024-06-07 21:40:10.459057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.516 malloc0 00:23:10.516 [2024-06-07 21:40:10.488330] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.516 [2024-06-07 21:40:10.488546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1498974 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1498974 /var/tmp/bdevperf.sock 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1498974 ']' 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:10.516 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.516 [2024-06-07 21:40:10.565242] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:23:10.516 [2024-06-07 21:40:10.565298] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498974 ] 00:23:10.516 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.516 [2024-06-07 21:40:10.646782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.516 [2024-06-07 21:40:10.738137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.774 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:10.774 21:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:10.774 21:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wzuEKLfHBe 00:23:11.032 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:11.032 [2024-06-07 21:40:11.297848] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.291 nvme0n1 00:23:11.291 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:11.291 Running I/O for 1 seconds... 00:23:12.666 00:23:12.666 Latency(us) 00:23:12.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.666 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:12.666 Verification LBA range: start 0x0 length 0x2000 00:23:12.666 nvme0n1 : 1.04 3597.45 14.05 0.00 0.00 34997.87 7238.75 62914.56 00:23:12.666 =================================================================================================================== 00:23:12.666 Total : 3597.45 14.05 0.00 0.00 34997.87 7238.75 62914.56 00:23:12.666 0 00:23:12.666 21:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:12.666 21:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.666 21:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.666 21:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.666 21:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:12.666 "subsystems": [ 00:23:12.666 { 00:23:12.666 "subsystem": "keyring", 00:23:12.666 "config": [ 00:23:12.666 { 00:23:12.666 "method": "keyring_file_add_key", 00:23:12.666 "params": { 00:23:12.666 "name": "key0", 00:23:12.666 "path": "/tmp/tmp.wzuEKLfHBe" 00:23:12.666 } 00:23:12.666 } 00:23:12.666 ] 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "subsystem": "iobuf", 00:23:12.666 "config": [ 00:23:12.666 { 00:23:12.666 "method": "iobuf_set_options", 00:23:12.666 "params": { 00:23:12.666 "small_pool_count": 8192, 00:23:12.666 "large_pool_count": 1024, 00:23:12.666 "small_bufsize": 8192, 00:23:12.666 "large_bufsize": 135168 00:23:12.666 } 00:23:12.666 } 00:23:12.666 ] 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "subsystem": "sock", 00:23:12.666 "config": [ 00:23:12.666 { 00:23:12.666 "method": "sock_set_default_impl", 00:23:12.666 "params": { 00:23:12.666 "impl_name": "posix" 00:23:12.666 } 
00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "method": "sock_impl_set_options", 00:23:12.666 "params": { 00:23:12.666 "impl_name": "ssl", 00:23:12.666 "recv_buf_size": 4096, 00:23:12.666 "send_buf_size": 4096, 00:23:12.666 "enable_recv_pipe": true, 00:23:12.666 "enable_quickack": false, 00:23:12.666 "enable_placement_id": 0, 00:23:12.666 "enable_zerocopy_send_server": true, 00:23:12.666 "enable_zerocopy_send_client": false, 00:23:12.666 "zerocopy_threshold": 0, 00:23:12.666 "tls_version": 0, 00:23:12.666 "enable_ktls": false 00:23:12.666 } 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "method": "sock_impl_set_options", 00:23:12.666 "params": { 00:23:12.666 "impl_name": "posix", 00:23:12.666 "recv_buf_size": 2097152, 00:23:12.666 "send_buf_size": 2097152, 00:23:12.666 "enable_recv_pipe": true, 00:23:12.666 "enable_quickack": false, 00:23:12.666 "enable_placement_id": 0, 00:23:12.666 "enable_zerocopy_send_server": true, 00:23:12.666 "enable_zerocopy_send_client": false, 00:23:12.666 "zerocopy_threshold": 0, 00:23:12.666 "tls_version": 0, 00:23:12.666 "enable_ktls": false 00:23:12.666 } 00:23:12.666 } 00:23:12.666 ] 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "subsystem": "vmd", 00:23:12.666 "config": [] 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "subsystem": "accel", 00:23:12.666 "config": [ 00:23:12.666 { 00:23:12.666 "method": "accel_set_options", 00:23:12.666 "params": { 00:23:12.666 "small_cache_size": 128, 00:23:12.666 "large_cache_size": 16, 00:23:12.666 "task_count": 2048, 00:23:12.666 "sequence_count": 2048, 00:23:12.666 "buf_count": 2048 00:23:12.666 } 00:23:12.666 } 00:23:12.666 ] 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "subsystem": "bdev", 00:23:12.666 "config": [ 00:23:12.666 { 00:23:12.666 "method": "bdev_set_options", 00:23:12.666 "params": { 00:23:12.666 "bdev_io_pool_size": 65535, 00:23:12.666 "bdev_io_cache_size": 256, 00:23:12.666 "bdev_auto_examine": true, 00:23:12.666 "iobuf_small_cache_size": 128, 00:23:12.666 "iobuf_large_cache_size": 16 00:23:12.666 } 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "method": "bdev_raid_set_options", 00:23:12.666 "params": { 00:23:12.666 "process_window_size_kb": 1024 00:23:12.666 } 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "method": "bdev_iscsi_set_options", 00:23:12.666 "params": { 00:23:12.666 "timeout_sec": 30 00:23:12.666 } 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "method": "bdev_nvme_set_options", 00:23:12.666 "params": { 00:23:12.666 "action_on_timeout": "none", 00:23:12.666 "timeout_us": 0, 00:23:12.666 "timeout_admin_us": 0, 00:23:12.666 "keep_alive_timeout_ms": 10000, 00:23:12.666 "arbitration_burst": 0, 00:23:12.666 "low_priority_weight": 0, 00:23:12.666 "medium_priority_weight": 0, 00:23:12.666 "high_priority_weight": 0, 00:23:12.666 "nvme_adminq_poll_period_us": 10000, 00:23:12.666 "nvme_ioq_poll_period_us": 0, 00:23:12.666 "io_queue_requests": 0, 00:23:12.666 "delay_cmd_submit": true, 00:23:12.666 "transport_retry_count": 4, 00:23:12.666 "bdev_retry_count": 3, 00:23:12.666 "transport_ack_timeout": 0, 00:23:12.666 "ctrlr_loss_timeout_sec": 0, 00:23:12.666 "reconnect_delay_sec": 0, 00:23:12.666 "fast_io_fail_timeout_sec": 0, 00:23:12.666 "disable_auto_failback": false, 00:23:12.666 "generate_uuids": false, 00:23:12.666 "transport_tos": 0, 00:23:12.666 "nvme_error_stat": false, 00:23:12.666 "rdma_srq_size": 0, 00:23:12.666 "io_path_stat": false, 00:23:12.666 "allow_accel_sequence": false, 00:23:12.666 "rdma_max_cq_size": 0, 00:23:12.666 "rdma_cm_event_timeout_ms": 0, 00:23:12.666 "dhchap_digests": [ 00:23:12.666 "sha256", 
00:23:12.666 "sha384", 00:23:12.666 "sha512" 00:23:12.666 ], 00:23:12.666 "dhchap_dhgroups": [ 00:23:12.666 "null", 00:23:12.666 "ffdhe2048", 00:23:12.666 "ffdhe3072", 00:23:12.666 "ffdhe4096", 00:23:12.666 "ffdhe6144", 00:23:12.666 "ffdhe8192" 00:23:12.666 ] 00:23:12.666 } 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "method": "bdev_nvme_set_hotplug", 00:23:12.666 "params": { 00:23:12.666 "period_us": 100000, 00:23:12.666 "enable": false 00:23:12.666 } 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "method": "bdev_malloc_create", 00:23:12.666 "params": { 00:23:12.666 "name": "malloc0", 00:23:12.666 "num_blocks": 8192, 00:23:12.666 "block_size": 4096, 00:23:12.666 "physical_block_size": 4096, 00:23:12.666 "uuid": "39bde223-2e6f-4765-acc9-133bc7c5ab9a", 00:23:12.666 "optimal_io_boundary": 0 00:23:12.666 } 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "method": "bdev_wait_for_examine" 00:23:12.666 } 00:23:12.666 ] 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "subsystem": "nbd", 00:23:12.666 "config": [] 00:23:12.666 }, 00:23:12.666 { 00:23:12.666 "subsystem": "scheduler", 00:23:12.666 "config": [ 00:23:12.666 { 00:23:12.666 "method": "framework_set_scheduler", 00:23:12.666 "params": { 00:23:12.666 "name": "static" 00:23:12.666 } 00:23:12.666 } 00:23:12.666 ] 00:23:12.667 }, 00:23:12.667 { 00:23:12.667 "subsystem": "nvmf", 00:23:12.667 "config": [ 00:23:12.667 { 00:23:12.667 "method": "nvmf_set_config", 00:23:12.667 "params": { 00:23:12.667 "discovery_filter": "match_any", 00:23:12.667 "admin_cmd_passthru": { 00:23:12.667 "identify_ctrlr": false 00:23:12.667 } 00:23:12.667 } 00:23:12.667 }, 00:23:12.667 { 00:23:12.667 "method": "nvmf_set_max_subsystems", 00:23:12.667 "params": { 00:23:12.667 "max_subsystems": 1024 00:23:12.667 } 00:23:12.667 }, 00:23:12.667 { 00:23:12.667 "method": "nvmf_set_crdt", 00:23:12.667 "params": { 00:23:12.667 "crdt1": 0, 00:23:12.667 "crdt2": 0, 00:23:12.667 "crdt3": 0 00:23:12.667 } 00:23:12.667 }, 00:23:12.667 { 00:23:12.667 "method": "nvmf_create_transport", 00:23:12.667 "params": { 00:23:12.667 "trtype": "TCP", 00:23:12.667 "max_queue_depth": 128, 00:23:12.667 "max_io_qpairs_per_ctrlr": 127, 00:23:12.667 "in_capsule_data_size": 4096, 00:23:12.667 "max_io_size": 131072, 00:23:12.667 "io_unit_size": 131072, 00:23:12.667 "max_aq_depth": 128, 00:23:12.667 "num_shared_buffers": 511, 00:23:12.667 "buf_cache_size": 4294967295, 00:23:12.667 "dif_insert_or_strip": false, 00:23:12.667 "zcopy": false, 00:23:12.667 "c2h_success": false, 00:23:12.667 "sock_priority": 0, 00:23:12.667 "abort_timeout_sec": 1, 00:23:12.667 "ack_timeout": 0, 00:23:12.667 "data_wr_pool_size": 0 00:23:12.667 } 00:23:12.667 }, 00:23:12.667 { 00:23:12.667 "method": "nvmf_create_subsystem", 00:23:12.667 "params": { 00:23:12.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.667 "allow_any_host": false, 00:23:12.667 "serial_number": "00000000000000000000", 00:23:12.667 "model_number": "SPDK bdev Controller", 00:23:12.667 "max_namespaces": 32, 00:23:12.667 "min_cntlid": 1, 00:23:12.667 "max_cntlid": 65519, 00:23:12.667 "ana_reporting": false 00:23:12.667 } 00:23:12.667 }, 00:23:12.667 { 00:23:12.667 "method": "nvmf_subsystem_add_host", 00:23:12.667 "params": { 00:23:12.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.667 "host": "nqn.2016-06.io.spdk:host1", 00:23:12.667 "psk": "key0" 00:23:12.667 } 00:23:12.667 }, 00:23:12.667 { 00:23:12.667 "method": "nvmf_subsystem_add_ns", 00:23:12.667 "params": { 00:23:12.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.667 "namespace": { 00:23:12.667 "nsid": 1, 
00:23:12.667 "bdev_name": "malloc0", 00:23:12.667 "nguid": "39BDE2232E6F4765ACC9133BC7C5AB9A", 00:23:12.667 "uuid": "39bde223-2e6f-4765-acc9-133bc7c5ab9a", 00:23:12.667 "no_auto_visible": false 00:23:12.667 } 00:23:12.667 } 00:23:12.667 }, 00:23:12.667 { 00:23:12.667 "method": "nvmf_subsystem_add_listener", 00:23:12.667 "params": { 00:23:12.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.667 "listen_address": { 00:23:12.667 "trtype": "TCP", 00:23:12.667 "adrfam": "IPv4", 00:23:12.667 "traddr": "10.0.0.2", 00:23:12.667 "trsvcid": "4420" 00:23:12.667 }, 00:23:12.667 "secure_channel": true 00:23:12.667 } 00:23:12.667 } 00:23:12.667 ] 00:23:12.667 } 00:23:12.667 ] 00:23:12.667 }' 00:23:12.667 21:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:12.926 21:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:12.926 "subsystems": [ 00:23:12.926 { 00:23:12.926 "subsystem": "keyring", 00:23:12.926 "config": [ 00:23:12.926 { 00:23:12.926 "method": "keyring_file_add_key", 00:23:12.926 "params": { 00:23:12.926 "name": "key0", 00:23:12.926 "path": "/tmp/tmp.wzuEKLfHBe" 00:23:12.926 } 00:23:12.926 } 00:23:12.926 ] 00:23:12.926 }, 00:23:12.926 { 00:23:12.926 "subsystem": "iobuf", 00:23:12.926 "config": [ 00:23:12.926 { 00:23:12.926 "method": "iobuf_set_options", 00:23:12.926 "params": { 00:23:12.926 "small_pool_count": 8192, 00:23:12.926 "large_pool_count": 1024, 00:23:12.926 "small_bufsize": 8192, 00:23:12.926 "large_bufsize": 135168 00:23:12.926 } 00:23:12.926 } 00:23:12.926 ] 00:23:12.926 }, 00:23:12.926 { 00:23:12.926 "subsystem": "sock", 00:23:12.926 "config": [ 00:23:12.926 { 00:23:12.926 "method": "sock_set_default_impl", 00:23:12.926 "params": { 00:23:12.926 "impl_name": "posix" 00:23:12.926 } 00:23:12.926 }, 00:23:12.926 { 00:23:12.926 "method": "sock_impl_set_options", 00:23:12.926 "params": { 00:23:12.926 "impl_name": "ssl", 00:23:12.926 "recv_buf_size": 4096, 00:23:12.926 "send_buf_size": 4096, 00:23:12.926 "enable_recv_pipe": true, 00:23:12.926 "enable_quickack": false, 00:23:12.926 "enable_placement_id": 0, 00:23:12.926 "enable_zerocopy_send_server": true, 00:23:12.926 "enable_zerocopy_send_client": false, 00:23:12.926 "zerocopy_threshold": 0, 00:23:12.926 "tls_version": 0, 00:23:12.926 "enable_ktls": false 00:23:12.926 } 00:23:12.926 }, 00:23:12.926 { 00:23:12.926 "method": "sock_impl_set_options", 00:23:12.926 "params": { 00:23:12.926 "impl_name": "posix", 00:23:12.926 "recv_buf_size": 2097152, 00:23:12.926 "send_buf_size": 2097152, 00:23:12.926 "enable_recv_pipe": true, 00:23:12.926 "enable_quickack": false, 00:23:12.926 "enable_placement_id": 0, 00:23:12.926 "enable_zerocopy_send_server": true, 00:23:12.926 "enable_zerocopy_send_client": false, 00:23:12.926 "zerocopy_threshold": 0, 00:23:12.926 "tls_version": 0, 00:23:12.926 "enable_ktls": false 00:23:12.926 } 00:23:12.926 } 00:23:12.926 ] 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "subsystem": "vmd", 00:23:12.927 "config": [] 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "subsystem": "accel", 00:23:12.927 "config": [ 00:23:12.927 { 00:23:12.927 "method": "accel_set_options", 00:23:12.927 "params": { 00:23:12.927 "small_cache_size": 128, 00:23:12.927 "large_cache_size": 16, 00:23:12.927 "task_count": 2048, 00:23:12.927 "sequence_count": 2048, 00:23:12.927 "buf_count": 2048 00:23:12.927 } 00:23:12.927 } 00:23:12.927 ] 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "subsystem": "bdev", 00:23:12.927 "config": [ 
00:23:12.927 { 00:23:12.927 "method": "bdev_set_options", 00:23:12.927 "params": { 00:23:12.927 "bdev_io_pool_size": 65535, 00:23:12.927 "bdev_io_cache_size": 256, 00:23:12.927 "bdev_auto_examine": true, 00:23:12.927 "iobuf_small_cache_size": 128, 00:23:12.927 "iobuf_large_cache_size": 16 00:23:12.927 } 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "method": "bdev_raid_set_options", 00:23:12.927 "params": { 00:23:12.927 "process_window_size_kb": 1024 00:23:12.927 } 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "method": "bdev_iscsi_set_options", 00:23:12.927 "params": { 00:23:12.927 "timeout_sec": 30 00:23:12.927 } 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "method": "bdev_nvme_set_options", 00:23:12.927 "params": { 00:23:12.927 "action_on_timeout": "none", 00:23:12.927 "timeout_us": 0, 00:23:12.927 "timeout_admin_us": 0, 00:23:12.927 "keep_alive_timeout_ms": 10000, 00:23:12.927 "arbitration_burst": 0, 00:23:12.927 "low_priority_weight": 0, 00:23:12.927 "medium_priority_weight": 0, 00:23:12.927 "high_priority_weight": 0, 00:23:12.927 "nvme_adminq_poll_period_us": 10000, 00:23:12.927 "nvme_ioq_poll_period_us": 0, 00:23:12.927 "io_queue_requests": 512, 00:23:12.927 "delay_cmd_submit": true, 00:23:12.927 "transport_retry_count": 4, 00:23:12.927 "bdev_retry_count": 3, 00:23:12.927 "transport_ack_timeout": 0, 00:23:12.927 "ctrlr_loss_timeout_sec": 0, 00:23:12.927 "reconnect_delay_sec": 0, 00:23:12.927 "fast_io_fail_timeout_sec": 0, 00:23:12.927 "disable_auto_failback": false, 00:23:12.927 "generate_uuids": false, 00:23:12.927 "transport_tos": 0, 00:23:12.927 "nvme_error_stat": false, 00:23:12.927 "rdma_srq_size": 0, 00:23:12.927 "io_path_stat": false, 00:23:12.927 "allow_accel_sequence": false, 00:23:12.927 "rdma_max_cq_size": 0, 00:23:12.927 "rdma_cm_event_timeout_ms": 0, 00:23:12.927 "dhchap_digests": [ 00:23:12.927 "sha256", 00:23:12.927 "sha384", 00:23:12.927 "sha512" 00:23:12.927 ], 00:23:12.927 "dhchap_dhgroups": [ 00:23:12.927 "null", 00:23:12.927 "ffdhe2048", 00:23:12.927 "ffdhe3072", 00:23:12.927 "ffdhe4096", 00:23:12.927 "ffdhe6144", 00:23:12.927 "ffdhe8192" 00:23:12.927 ] 00:23:12.927 } 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "method": "bdev_nvme_attach_controller", 00:23:12.927 "params": { 00:23:12.927 "name": "nvme0", 00:23:12.927 "trtype": "TCP", 00:23:12.927 "adrfam": "IPv4", 00:23:12.927 "traddr": "10.0.0.2", 00:23:12.927 "trsvcid": "4420", 00:23:12.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.927 "prchk_reftag": false, 00:23:12.927 "prchk_guard": false, 00:23:12.927 "ctrlr_loss_timeout_sec": 0, 00:23:12.927 "reconnect_delay_sec": 0, 00:23:12.927 "fast_io_fail_timeout_sec": 0, 00:23:12.927 "psk": "key0", 00:23:12.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.927 "hdgst": false, 00:23:12.927 "ddgst": false 00:23:12.927 } 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "method": "bdev_nvme_set_hotplug", 00:23:12.927 "params": { 00:23:12.927 "period_us": 100000, 00:23:12.927 "enable": false 00:23:12.927 } 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "method": "bdev_enable_histogram", 00:23:12.927 "params": { 00:23:12.927 "name": "nvme0n1", 00:23:12.927 "enable": true 00:23:12.927 } 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "method": "bdev_wait_for_examine" 00:23:12.927 } 00:23:12.927 ] 00:23:12.927 }, 00:23:12.927 { 00:23:12.927 "subsystem": "nbd", 00:23:12.927 "config": [] 00:23:12.927 } 00:23:12.927 ] 00:23:12.927 }' 00:23:12.927 21:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1498974 00:23:12.927 21:40:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@949 -- # '[' -z 1498974 ']' 00:23:12.927 21:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1498974 00:23:12.927 21:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:12.927 21:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:12.927 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1498974 00:23:12.927 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:12.927 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:12.927 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1498974' 00:23:12.927 killing process with pid 1498974 00:23:12.927 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1498974 00:23:12.927 Received shutdown signal, test time was about 1.000000 seconds 00:23:12.927 00:23:12.927 Latency(us) 00:23:12.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.927 =================================================================================================================== 00:23:12.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.927 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1498974 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1498696 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1498696 ']' 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1498696 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1498696 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1498696' 00:23:13.186 killing process with pid 1498696 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1498696 00:23:13.186 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1498696 00:23:13.444 21:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:13.444 21:40:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.444 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:13.444 21:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:13.444 "subsystems": [ 00:23:13.444 { 00:23:13.444 "subsystem": "keyring", 00:23:13.444 "config": [ 00:23:13.444 { 00:23:13.444 "method": "keyring_file_add_key", 00:23:13.444 "params": { 00:23:13.444 "name": "key0", 00:23:13.444 "path": "/tmp/tmp.wzuEKLfHBe" 00:23:13.444 } 00:23:13.444 } 00:23:13.444 ] 00:23:13.444 }, 00:23:13.444 { 00:23:13.444 "subsystem": "iobuf", 00:23:13.444 "config": [ 00:23:13.444 { 00:23:13.444 "method": "iobuf_set_options", 00:23:13.444 "params": { 00:23:13.444 "small_pool_count": 8192, 00:23:13.444 "large_pool_count": 1024, 00:23:13.444 "small_bufsize": 8192, 00:23:13.444 "large_bufsize": 135168 00:23:13.444 } 00:23:13.444 } 00:23:13.444 ] 00:23:13.444 }, 
00:23:13.444 { 00:23:13.444 "subsystem": "sock", 00:23:13.444 "config": [ 00:23:13.445 { 00:23:13.445 "method": "sock_set_default_impl", 00:23:13.445 "params": { 00:23:13.445 "impl_name": "posix" 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "sock_impl_set_options", 00:23:13.445 "params": { 00:23:13.445 "impl_name": "ssl", 00:23:13.445 "recv_buf_size": 4096, 00:23:13.445 "send_buf_size": 4096, 00:23:13.445 "enable_recv_pipe": true, 00:23:13.445 "enable_quickack": false, 00:23:13.445 "enable_placement_id": 0, 00:23:13.445 "enable_zerocopy_send_server": true, 00:23:13.445 "enable_zerocopy_send_client": false, 00:23:13.445 "zerocopy_threshold": 0, 00:23:13.445 "tls_version": 0, 00:23:13.445 "enable_ktls": false 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "sock_impl_set_options", 00:23:13.445 "params": { 00:23:13.445 "impl_name": "posix", 00:23:13.445 "recv_buf_size": 2097152, 00:23:13.445 "send_buf_size": 2097152, 00:23:13.445 "enable_recv_pipe": true, 00:23:13.445 "enable_quickack": false, 00:23:13.445 "enable_placement_id": 0, 00:23:13.445 "enable_zerocopy_send_server": true, 00:23:13.445 "enable_zerocopy_send_client": false, 00:23:13.445 "zerocopy_threshold": 0, 00:23:13.445 "tls_version": 0, 00:23:13.445 "enable_ktls": false 00:23:13.445 } 00:23:13.445 } 00:23:13.445 ] 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "subsystem": "vmd", 00:23:13.445 "config": [] 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "subsystem": "accel", 00:23:13.445 "config": [ 00:23:13.445 { 00:23:13.445 "method": "accel_set_options", 00:23:13.445 "params": { 00:23:13.445 "small_cache_size": 128, 00:23:13.445 "large_cache_size": 16, 00:23:13.445 "task_count": 2048, 00:23:13.445 "sequence_count": 2048, 00:23:13.445 "buf_count": 2048 00:23:13.445 } 00:23:13.445 } 00:23:13.445 ] 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "subsystem": "bdev", 00:23:13.445 "config": [ 00:23:13.445 { 00:23:13.445 "method": "bdev_set_options", 00:23:13.445 "params": { 00:23:13.445 "bdev_io_pool_size": 65535, 00:23:13.445 "bdev_io_cache_size": 256, 00:23:13.445 "bdev_auto_examine": true, 00:23:13.445 "iobuf_small_cache_size": 128, 00:23:13.445 "iobuf_large_cache_size": 16 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "bdev_raid_set_options", 00:23:13.445 "params": { 00:23:13.445 "process_window_size_kb": 1024 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "bdev_iscsi_set_options", 00:23:13.445 "params": { 00:23:13.445 "timeout_sec": 30 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "bdev_nvme_set_options", 00:23:13.445 "params": { 00:23:13.445 "action_on_timeout": "none", 00:23:13.445 "timeout_us": 0, 00:23:13.445 "timeout_admin_us": 0, 00:23:13.445 "keep_alive_timeout_ms": 10000, 00:23:13.445 "arbitration_burst": 0, 00:23:13.445 "low_priority_weight": 0, 00:23:13.445 "medium_priority_weight": 0, 00:23:13.445 "high_priority_weight": 0, 00:23:13.445 "nvme_adminq_poll_period_us": 10000, 00:23:13.445 "nvme_ioq_poll_period_us": 0, 00:23:13.445 "io_queue_requests": 0, 00:23:13.445 "delay_cmd_submit": true, 00:23:13.445 "transport_retry_count": 4, 00:23:13.445 "bdev_retry_count": 3, 00:23:13.445 "transport_ack_timeout": 0, 00:23:13.445 "ctrlr_loss_timeout_sec": 0, 00:23:13.445 "reconnect_delay_sec": 0, 00:23:13.445 "fast_io_fail_timeout_sec": 0, 00:23:13.445 "disable_auto_failback": false, 00:23:13.445 "generate_uuids": false, 00:23:13.445 "transport_tos": 0, 00:23:13.445 "nvme_error_stat": false, 00:23:13.445 "rdma_srq_size": 0, 
00:23:13.445 "io_path_stat": false, 00:23:13.445 "allow_accel_sequence": false, 00:23:13.445 "rdma_max_cq_size": 0, 00:23:13.445 "rdma_cm_event_timeout_ms": 0, 00:23:13.445 "dhchap_digests": [ 00:23:13.445 "sha256", 00:23:13.445 "sha384", 00:23:13.445 "sha512" 00:23:13.445 ], 00:23:13.445 "dhchap_dhgroups": [ 00:23:13.445 "null", 00:23:13.445 "ffdhe2048", 00:23:13.445 "ffdhe3072", 00:23:13.445 "ffdhe4096", 00:23:13.445 "ffdhe6144", 00:23:13.445 "ffdhe8192" 00:23:13.445 ] 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "bdev_nvme_set_hotplug", 00:23:13.445 "params": { 00:23:13.445 "period_us": 100000, 00:23:13.445 "enable": false 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "bdev_malloc_create", 00:23:13.445 "params": { 00:23:13.445 "name": "malloc0", 00:23:13.445 "num_blocks": 8192, 00:23:13.445 "block_size": 4096, 00:23:13.445 "physical_block_size": 4096, 00:23:13.445 "uuid": "39bde223-2e6f-4765-acc9-133bc7c5ab9a", 00:23:13.445 "optimal_io_boundary": 0 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "bdev_wait_for_examine" 00:23:13.445 } 00:23:13.445 ] 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "subsystem": "nbd", 00:23:13.445 "config": [] 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "subsystem": "scheduler", 00:23:13.445 "config": [ 00:23:13.445 { 00:23:13.445 "method": "framework_set_scheduler", 00:23:13.445 "params": { 00:23:13.445 "name": "static" 00:23:13.445 } 00:23:13.445 } 00:23:13.445 ] 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "subsystem": "nvmf", 00:23:13.445 "config": [ 00:23:13.445 { 00:23:13.445 "method": "nvmf_set_config", 00:23:13.445 "params": { 00:23:13.445 "discovery_filter": "match_any", 00:23:13.445 "admin_cmd_passthru": { 00:23:13.445 "identify_ctrlr": false 00:23:13.445 } 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "nvmf_set_max_subsystems", 00:23:13.445 "params": { 00:23:13.445 "max_subsystems": 1024 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "nvmf_set_crdt", 00:23:13.445 "params": { 00:23:13.445 "crdt1": 0, 00:23:13.445 "crdt2": 0, 00:23:13.445 "crdt3": 0 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "nvmf_create_transport", 00:23:13.445 "params": { 00:23:13.445 "trtype": "TCP", 00:23:13.445 "max_queue_depth": 128, 00:23:13.445 "max_io_qpairs_per_ctrlr": 127, 00:23:13.445 "in_capsule_data_size": 4096, 00:23:13.445 "max_io_size": 131072, 00:23:13.445 "io_unit_size": 131072, 00:23:13.445 "max_aq_depth": 128, 00:23:13.445 "num_shared_buffers": 511, 00:23:13.445 "buf_cache_size": 4294967295, 00:23:13.445 "dif_insert_or_strip": false, 00:23:13.445 "zcopy": false, 00:23:13.445 "c2h_success": false, 00:23:13.445 "sock_priority": 0, 00:23:13.445 "abort_timeout_sec": 1, 00:23:13.445 "ack_timeout": 0, 00:23:13.445 "data_wr_pool_size": 0 00:23:13.445 } 00:23:13.445 }, 00:23:13.445 { 00:23:13.445 "method": "nvmf_create_subsystem", 00:23:13.445 "params": { 00:23:13.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.445 "allow_any_host": false, 00:23:13.445 "serial_number": "00000000000000000000", 00:23:13.446 "model_number": "SPDK bdev Controller", 00:23:13.446 "max_namespaces": 32, 00:23:13.446 "min_cntlid": 1, 00:23:13.446 "max_cntlid": 65519, 00:23:13.446 "ana_reporting": false 00:23:13.446 } 00:23:13.446 }, 00:23:13.446 { 00:23:13.446 "method": "nvmf_subsystem_add_host", 00:23:13.446 "params": { 00:23:13.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.446 "host": "nqn.2016-06.io.spdk:host1", 00:23:13.446 "psk": "key0" 00:23:13.446 } 
00:23:13.446 }, 00:23:13.446 { 00:23:13.446 "method": "nvmf_subsystem_add_ns", 00:23:13.446 "params": { 00:23:13.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.446 "namespace": { 00:23:13.446 "nsid": 1, 00:23:13.446 "bdev_name": "malloc0", 00:23:13.446 "nguid": "39BDE2232E6F4765ACC9133BC7C5AB9A", 00:23:13.446 "uuid": "39bde223-2e6f-4765-acc9-133bc7c5ab9a", 00:23:13.446 "no_auto_visible": false 00:23:13.446 } 00:23:13.446 } 00:23:13.446 }, 00:23:13.446 { 00:23:13.446 "method": "nvmf_subsystem_add_listener", 00:23:13.446 "params": { 00:23:13.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.446 "listen_address": { 00:23:13.446 "trtype": "TCP", 00:23:13.446 "adrfam": "IPv4", 00:23:13.446 "traddr": "10.0.0.2", 00:23:13.446 "trsvcid": "4420" 00:23:13.446 }, 00:23:13.446 "secure_channel": true 00:23:13.446 } 00:23:13.446 } 00:23:13.446 ] 00:23:13.446 } 00:23:13.446 ] 00:23:13.446 }' 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1499515 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1499515 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1499515 ']' 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:13.446 21:40:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.446 [2024-06-07 21:40:13.567756] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:23:13.446 [2024-06-07 21:40:13.567812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.446 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.446 [2024-06-07 21:40:13.661020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.704 [2024-06-07 21:40:13.750907] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.704 [2024-06-07 21:40:13.750943] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.704 [2024-06-07 21:40:13.750953] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.704 [2024-06-07 21:40:13.750962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.704 [2024-06-07 21:40:13.750969] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
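The JSON blob echoed just above is the target-side configuration that tls.sh hands to nvmf_tgt through -c /dev/fd/62. A minimal sketch of that pattern, assuming bash and an SPDK build tree (paths illustrative; the ip-netns-exec prefix from this run is omitted):

    # Inline JSON config handed to the target via process substitution;
    # bash backs <(...) with a /dev/fd/NN pipe, which is why the log
    # shows "-c /dev/fd/62" next to a separate xtrace'd echo.
    tgtconf='{ "subsystems": [] }'
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtconf")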
00:23:13.704 [2024-06-07 21:40:13.751034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.704 [2024-06-07 21:40:13.970118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.960 [2024-06-07 21:40:14.002115] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.960 [2024-06-07 21:40:14.012352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1499611 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1499611 /var/tmp/bdevperf.sock 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1499611 ']' 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
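Because bdevperf is launched with -z it comes up idle, holding only its RPC socket; the initiator-side JSON that follows (keyring + sock + bdev subsystems, with the TLS PSK referenced as "psk": "key0") is delivered the same way on /dev/fd/63, and the timed workload is triggered later over /var/tmp/bdevperf.sock. A rough sketch of that control flow, using only commands visible in this run (paths illustrative):

    # Start bdevperf idle (-z) on a private RPC socket, config inlined
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # Sanity-check the attached controller, then run the timed workload
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests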
00:23:14.524 21:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:14.524 "subsystems": [ 00:23:14.524 { 00:23:14.524 "subsystem": "keyring", 00:23:14.524 "config": [ 00:23:14.524 { 00:23:14.524 "method": "keyring_file_add_key", 00:23:14.524 "params": { 00:23:14.524 "name": "key0", 00:23:14.524 "path": "/tmp/tmp.wzuEKLfHBe" 00:23:14.524 } 00:23:14.524 } 00:23:14.524 ] 00:23:14.524 }, 00:23:14.524 { 00:23:14.524 "subsystem": "iobuf", 00:23:14.524 "config": [ 00:23:14.524 { 00:23:14.524 "method": "iobuf_set_options", 00:23:14.524 "params": { 00:23:14.524 "small_pool_count": 8192, 00:23:14.524 "large_pool_count": 1024, 00:23:14.524 "small_bufsize": 8192, 00:23:14.524 "large_bufsize": 135168 00:23:14.524 } 00:23:14.524 } 00:23:14.524 ] 00:23:14.524 }, 00:23:14.524 { 00:23:14.524 "subsystem": "sock", 00:23:14.524 "config": [ 00:23:14.524 { 00:23:14.525 "method": "sock_set_default_impl", 00:23:14.525 "params": { 00:23:14.525 "impl_name": "posix" 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "sock_impl_set_options", 00:23:14.525 "params": { 00:23:14.525 "impl_name": "ssl", 00:23:14.525 "recv_buf_size": 4096, 00:23:14.525 "send_buf_size": 4096, 00:23:14.525 "enable_recv_pipe": true, 00:23:14.525 "enable_quickack": false, 00:23:14.525 "enable_placement_id": 0, 00:23:14.525 "enable_zerocopy_send_server": true, 00:23:14.525 "enable_zerocopy_send_client": false, 00:23:14.525 "zerocopy_threshold": 0, 00:23:14.525 "tls_version": 0, 00:23:14.525 "enable_ktls": false 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "sock_impl_set_options", 00:23:14.525 "params": { 00:23:14.525 "impl_name": "posix", 00:23:14.525 "recv_buf_size": 2097152, 00:23:14.525 "send_buf_size": 2097152, 00:23:14.525 "enable_recv_pipe": true, 00:23:14.525 "enable_quickack": false, 00:23:14.525 "enable_placement_id": 0, 00:23:14.525 "enable_zerocopy_send_server": true, 00:23:14.525 "enable_zerocopy_send_client": false, 00:23:14.525 "zerocopy_threshold": 0, 00:23:14.525 "tls_version": 0, 00:23:14.525 "enable_ktls": false 00:23:14.525 } 00:23:14.525 } 00:23:14.525 ] 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "subsystem": "vmd", 00:23:14.525 "config": [] 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "subsystem": "accel", 00:23:14.525 "config": [ 00:23:14.525 { 00:23:14.525 "method": "accel_set_options", 00:23:14.525 "params": { 00:23:14.525 "small_cache_size": 128, 00:23:14.525 "large_cache_size": 16, 00:23:14.525 "task_count": 2048, 00:23:14.525 "sequence_count": 2048, 00:23:14.525 "buf_count": 2048 00:23:14.525 } 00:23:14.525 } 00:23:14.525 ] 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "subsystem": "bdev", 00:23:14.525 "config": [ 00:23:14.525 { 00:23:14.525 "method": "bdev_set_options", 00:23:14.525 "params": { 00:23:14.525 "bdev_io_pool_size": 65535, 00:23:14.525 "bdev_io_cache_size": 256, 00:23:14.525 "bdev_auto_examine": true, 00:23:14.525 "iobuf_small_cache_size": 128, 00:23:14.525 "iobuf_large_cache_size": 16 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "bdev_raid_set_options", 00:23:14.525 "params": { 00:23:14.525 "process_window_size_kb": 1024 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "bdev_iscsi_set_options", 00:23:14.525 "params": { 00:23:14.525 "timeout_sec": 30 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "bdev_nvme_set_options", 00:23:14.525 "params": { 00:23:14.525 "action_on_timeout": "none", 00:23:14.525 "timeout_us": 0, 00:23:14.525 "timeout_admin_us": 0, 00:23:14.525 "keep_alive_timeout_ms": 
10000, 00:23:14.525 "arbitration_burst": 0, 00:23:14.525 "low_priority_weight": 0, 00:23:14.525 "medium_priority_weight": 0, 00:23:14.525 "high_priority_weight": 0, 00:23:14.525 "nvme_adminq_poll_period_us": 10000, 00:23:14.525 "nvme_ioq_poll_period_us": 0, 00:23:14.525 "io_queue_requests": 512, 00:23:14.525 "delay_cmd_submit": true, 00:23:14.525 "transport_retry_count": 4, 00:23:14.525 "bdev_retry_count": 3, 00:23:14.525 "transport_ack_timeout": 0, 00:23:14.525 "ctrlr_loss_timeout_sec": 0, 00:23:14.525 "reconnect_delay_sec": 0, 00:23:14.525 "fast_io_fail_timeout_sec": 0, 00:23:14.525 "disable_auto_failback": false, 00:23:14.525 "generate_uuids": false, 00:23:14.525 "transport_tos": 0, 00:23:14.525 "nvme_error_stat": false, 00:23:14.525 "rdma_srq_size": 0, 00:23:14.525 "io_path_stat": false, 00:23:14.525 "allow_accel_sequence": false, 00:23:14.525 "rdma_max_cq_size": 0, 00:23:14.525 "rdma_cm_event_timeout_ms": 0, 00:23:14.525 "dhchap_digests": [ 00:23:14.525 "sha256", 00:23:14.525 "sha384", 00:23:14.525 "sha512" 00:23:14.525 ], 00:23:14.525 "dhchap_dhgroups": [ 00:23:14.525 "null", 00:23:14.525 "ffdhe2048", 00:23:14.525 "ffdhe3072", 00:23:14.525 "ffdhe4096", 00:23:14.525 "ffdhe6144", 00:23:14.525 "ffdhe8192" 00:23:14.525 ] 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "bdev_nvme_attach_controller", 00:23:14.525 "params": { 00:23:14.525 "name": "nvme0", 00:23:14.525 "trtype": "TCP", 00:23:14.525 "adrfam": "IPv4", 00:23:14.525 "traddr": "10.0.0.2", 00:23:14.525 "trsvcid": "4420", 00:23:14.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.525 "prchk_reftag": false, 00:23:14.525 "prchk_guard": false, 00:23:14.525 "ctrlr_loss_timeout_sec": 0, 00:23:14.525 "reconnect_delay_sec": 0, 00:23:14.525 "fast_io_fail_timeout_sec": 0, 00:23:14.525 "psk": "key0", 00:23:14.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.525 "hdgst": false, 00:23:14.525 "ddgst": false 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "bdev_nvme_set_hotplug", 00:23:14.525 "params": { 00:23:14.525 "period_us": 100000, 00:23:14.525 "enable": false 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "bdev_enable_histogram", 00:23:14.525 "params": { 00:23:14.525 "name": "nvme0n1", 00:23:14.525 "enable": true 00:23:14.525 } 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "method": "bdev_wait_for_examine" 00:23:14.525 } 00:23:14.525 ] 00:23:14.525 }, 00:23:14.525 { 00:23:14.525 "subsystem": "nbd", 00:23:14.525 "config": [] 00:23:14.525 } 00:23:14.525 ] 00:23:14.525 }' 00:23:14.525 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:14.525 21:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.525 [2024-06-07 21:40:14.593475] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:23:14.525 [2024-06-07 21:40:14.593534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499611 ] 00:23:14.525 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.525 [2024-06-07 21:40:14.673566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.525 [2024-06-07 21:40:14.763568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.782 [2024-06-07 21:40:14.920804] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.347 21:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:15.347 21:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:23:15.347 21:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.347 21:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:15.604 21:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.604 21:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:15.604 Running I/O for 1 seconds... 00:23:16.977 00:23:16.977 Latency(us) 00:23:16.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.977 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:16.977 Verification LBA range: start 0x0 length 0x2000 00:23:16.977 nvme0n1 : 1.03 3670.34 14.34 0.00 0.00 34333.94 9353.77 51713.86 00:23:16.977 =================================================================================================================== 00:23:16.977 Total : 3670.34 14.34 0.00 0.00 34333.94 9353.77 51713.86 00:23:16.977 0 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:23:16.977 21:40:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:16.977 nvmf_trace.0 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1499611 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1499611 ']' 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- 
# kill -0 1499611 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1499611 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1499611' 00:23:16.977 killing process with pid 1499611 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1499611 00:23:16.977 Received shutdown signal, test time was about 1.000000 seconds 00:23:16.977 00:23:16.977 Latency(us) 00:23:16.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.977 =================================================================================================================== 00:23:16.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.977 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1499611 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.234 rmmod nvme_tcp 00:23:17.234 rmmod nvme_fabrics 00:23:17.234 rmmod nvme_keyring 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1499515 ']' 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1499515 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1499515 ']' 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1499515 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1499515 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1499515' 00:23:17.234 killing process with pid 1499515 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1499515 00:23:17.234 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1499515 00:23:17.493 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:17.493 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:17.493 21:40:17 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:17.493 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.493 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.493 21:40:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.493 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.493 21:40:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.392 21:40:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.392 21:40:19 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.gMciauc7pl /tmp/tmp.7SHRavbuNN /tmp/tmp.wzuEKLfHBe 00:23:19.392 00:23:19.392 real 1m26.738s 00:23:19.392 user 2m13.944s 00:23:19.392 sys 0m29.514s 00:23:19.392 21:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:19.392 21:40:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.392 ************************************ 00:23:19.392 END TEST nvmf_tls 00:23:19.392 ************************************ 00:23:19.652 21:40:19 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:19.652 21:40:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:19.652 21:40:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:19.652 21:40:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:19.652 ************************************ 00:23:19.652 START TEST nvmf_fips 00:23:19.652 ************************************ 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:19.652 * Looking for test storage... 
00:23:19.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.652 21:40:19 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:19.652 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:19.653 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:23:19.912 21:40:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:23:19.912 Error setting digest 00:23:19.912 0062E344C67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:19.912 0062E344C67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.912 21:40:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:26.471 
21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:26.471 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:26.471 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:26.471 Found net devices under 0000:af:00.0: cvl_0_0 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:26.471 Found net devices under 0000:af:00.1: cvl_0_1 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:26.471 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.472 21:40:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:26.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:23:26.472 00:23:26.472 --- 10.0.0.2 ping statistics --- 00:23:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.472 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:26.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:23:26.472 00:23:26.472 --- 10.0.0.1 ping statistics --- 00:23:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.472 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1504106 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1504106 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1504106 ']' 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:26.472 21:40:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.472 [2024-06-07 21:40:26.365193] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:23:26.472 [2024-06-07 21:40:26.365253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.472 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.472 [2024-06-07 21:40:26.451554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.472 [2024-06-07 21:40:26.542177] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.472 [2024-06-07 21:40:26.542214] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
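For orientation, the nvmf_tcp_init sequence traced above builds the whole two-port test topology: one E810 port (cvl_0_0) moves into a private network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, a first-position iptables rule opens TCP/4420, and a ping in each direction proves the link. The same steps as a standalone sketch (the variable names are ours; the commands are the ones visible in the trace):

    tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$tgt_if"; ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"               # target port now only visible inside $ns
    ip addr add 10.0.0.1/24 dev "$ini_if"           # initiator address, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns to target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1          # target ns back to root ns

Because the target port now lives in the namespace, every target-side command from here on is prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is exactly what NVMF_TARGET_NS_CMD expands to.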
00:23:26.472 [2024-06-07 21:40:26.542224] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.472 [2024-06-07 21:40:26.542233] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.472 [2024-06-07 21:40:26.542240] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.472 [2024-06-07 21:40:26.542266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.038 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:27.297 [2024-06-07 21:40:27.531244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.297 [2024-06-07 21:40:27.547242] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.297 [2024-06-07 21:40:27.547439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.556 [2024-06-07 21:40:27.576477] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:27.556 malloc0 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1504391 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1504391 /var/tmp/bdevperf.sock 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1504391 ']' 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
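Everything target-side for TLS happens inside the single rpc.py invocation at fips/fips.sh@24, so the trace only shows its side effects: the TCP transport init, the TLS-experimental listener on 10.0.0.2:4420, the PSK-path deprecation warning out of nvmf_tcp_subsystem_add_host, and the malloc0 namespace. A plausible reconstruction of that batch, with the caveat that the exact arguments below are our assumption rather than anything echoed in the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt   # NVMeTLSkey-1:01:..., mode 0600 as set above
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $rpc bdev_malloc_create 32 512 -b malloc0          # sizes assumed; the trace only names "malloc0"
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

Passing the PSK as a file path is what trips the tcp.c:3670 warning; the log itself notes the feature is slated for removal in v24.09.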
00:23:27.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.556 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.556 [2024-06-07 21:40:27.678005] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:23:27.556 [2024-06-07 21:40:27.678074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504391 ] 00:23:27.556 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.556 [2024-06-07 21:40:27.741851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.556 [2024-06-07 21:40:27.812719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.815 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:27.815 21:40:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:23:27.815 21:40:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:28.074 [2024-06-07 21:40:28.129222] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.074 [2024-06-07 21:40:28.129293] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.074 TLSTESTn1 00:23:28.074 21:40:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:28.074 Running I/O for 10 seconds... 
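The initiator half follows the standard bdevperf pattern: start it paused (-z) on a private RPC socket, attach the controller over that socket with the same PSK file, then kick the workload from bdevperf.py. Condensed from the commands visible above, with the Jenkins path prefix shortened for readability:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

A failed handshake would abort bdev_nvme_attach_controller and the TLSTESTn1 bdev would never appear, so the ten-second verify run doubles as the FIPS pass/fail signal.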
00:23:40.278 00:23:40.278 Latency(us) 00:23:40.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.278 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.278 Verification LBA range: start 0x0 length 0x2000 00:23:40.278 TLSTESTn1 : 10.03 3440.71 13.44 0.00 0.00 37129.85 4706.68 48377.48 00:23:40.278 =================================================================================================================== 00:23:40.278 Total : 3440.71 13.44 0.00 0.00 37129.85 4706.68 48377.48 00:23:40.278 0 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:40.278 nvmf_trace.0 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1504391 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1504391 ']' 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1504391 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1504391 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1504391' 00:23:40.278 killing process with pid 1504391 00:23:40.278 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1504391 00:23:40.278 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.278 00:23:40.279 Latency(us) 00:23:40.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.279 =================================================================================================================== 00:23:40.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.279 [2024-06-07 21:40:38.561811] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1504391 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.279 rmmod nvme_tcp 00:23:40.279 rmmod nvme_fabrics 00:23:40.279 rmmod nvme_keyring 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1504106 ']' 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1504106 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1504106 ']' 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1504106 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1504106 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1504106' 00:23:40.279 killing process with pid 1504106 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1504106 00:23:40.279 [2024-06-07 21:40:38.851957] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:40.279 21:40:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1504106 00:23:40.279 21:40:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.279 21:40:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.279 21:40:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.279 21:40:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.279 21:40:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.279 21:40:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.279 21:40:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.279 21:40:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.216 21:40:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.216 21:40:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:41.216 00:23:41.216 real 0m21.419s 00:23:41.216 user 0m22.301s 00:23:41.216 sys 0m9.582s 00:23:41.216 21:40:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:41.216 21:40:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:41.216 ************************************ 00:23:41.216 END TEST nvmf_fips 
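Taken together, the cleanup traced across the two preceding chunks does three things: archive the tracepoint ring, kill both SPDK processes, and peel the kernel modules back off. The target ran with -e 0xFFFF, so /dev/shm/nvmf_trace.0 (the suffix is the shm id, 0 here) holds the event ring that process_shm tars up for offline 'spdk_trace' analysis, and the module unload runs under set +e in a bounded retry loop because it can race with the dying target. A condensed sketch; the retry delay and the namespace-removal detail are our reading, the rest mirrors the trace:

    # archive the trace shm before killing anything
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')          # yields nvmf_trace.0
    for f in $shm_files; do
        tar -C /dev/shm/ -czf "$output_dir/${f}_shm.tar.gz" "$f"   # $output_dir stands in for spdk/../output
    done
    kill "$bdevperf_pid" "$nvmf_pid"       # killprocess; both pid variables here are placeholders
    # tolerant module teardown
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                            # delay assumed; the trace shows the first pass succeeding
    done
    set -e
    ip netns delete cvl_0_0_ns_spdk        # roughly what _remove_spdk_ns does; name from this run
    ip -4 addr flush cvl_0_1

The 'rmmod nvme_tcp / nvme_fabrics / nvme_keyring' lines interleaved in the trace are modprobe's verbose output, not separate commands.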
00:23:41.216 ************************************ 00:23:41.216 21:40:41 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:23:41.216 21:40:41 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:23:41.216 21:40:41 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:23:41.216 21:40:41 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:23:41.216 21:40:41 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.216 21:40:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:47.859 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.859 21:40:47 nvmf_tcp -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:47.859 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:47.859 Found net devices under 0000:af:00.0: cvl_0_0 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:47.859 Found net devices under 0000:af:00.1: cvl_0_1 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.859 21:40:47 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:23:47.860 21:40:47 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:47.860 21:40:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:47.860 21:40:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:47.860 21:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
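gather_supported_nvmf_pci_devs, which reruns at every test entry point, never consults driver names directly: it buckets NICs by PCI vendor:device ID (0x8086:0x159b lands in the e810 array, hence the ice driver), then resolves each PCI function to its kernel netdev through sysfs. The lookup at the heart of the 'Found net devices under ...' lines, as a standalone sketch:

    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob matches e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The '[[ up == up ]]' expansions appear to compare each device's operstate against "up", so a port only makes it into net_devs when it has link; that both cvl ports pass is a precondition for every suite in this job.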
00:23:47.860 ************************************ 00:23:47.860 START TEST nvmf_perf_adq 00:23:47.860 ************************************ 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:47.860 * Looking for test storage... 00:23:47.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.860 21:40:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:53.131 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:53.131 Found 0000:af:00.1 (0x8086 - 0x159b) 
00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:53.131 Found net devices under 0000:af:00.0: cvl_0_0 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:53.131 Found net devices under 0000:af:00.1: cvl_0_1 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:53.131 21:40:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:54.508 21:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:56.412 21:40:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:01.687 21:41:01 
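Between suites, adq_reload_driver (perf_adq.sh@53-55) bounces the NIC driver so no channel or filter state carries over into the ADQ measurements:

    rmmod ice
    modprobe ice
    sleep 5     # presumably to let the ports re-probe and bring link back before nvmftestinit re-walks sysfs

The jump in the timestamps here (21:40:53 through 21:41:01) is exactly this reload-and-settle window.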
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:01.687 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:01.687 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:01.687 Found net devices under 0000:af:00.0: cvl_0_0 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:01.687 Found net devices under 0000:af:00.1: cvl_0_1 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.687 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.688 21:41:01 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:24:01.688 00:24:01.688 --- 10.0.0.2 ping statistics --- 00:24:01.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.688 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:24:01.688 00:24:01.688 --- 10.0.0.1 ping statistics --- 00:24:01.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.688 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.688 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1515454 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1515454 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1515454 ']' 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:01.946 21:41:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:01.946 [2024-06-07 21:41:02.041541] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:24:01.946 [2024-06-07 21:41:02.041594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.946 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.946 [2024-06-07 21:41:02.135653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.204 [2024-06-07 21:41:02.229957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.204 [2024-06-07 21:41:02.229997] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.204 [2024-06-07 21:41:02.230008] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.204 [2024-06-07 21:41:02.230017] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.205 [2024-06-07 21:41:02.230031] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.205 [2024-06-07 21:41:02.230088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.205 [2024-06-07 21:41:02.230105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.205 [2024-06-07 21:41:02.230231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.205 [2024-06-07 21:41:02.230232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.771 21:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:02.771 21:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:24:02.771 21:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.771 21:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:02.771 21:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.771 21:41:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.771 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:24:02.771 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:02.771 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:02.771 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.771 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.771 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.030 [2024-06-07 21:41:03.180090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.030 Malloc1 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.030 [2024-06-07 21:41:03.231925] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1515635 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:24:03.030 21:41:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:03.030 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:24:05.559 
"tick_rate": 2200000000, 00:24:05.559 "poll_groups": [ 00:24:05.559 { 00:24:05.559 "name": "nvmf_tgt_poll_group_000", 00:24:05.559 "admin_qpairs": 1, 00:24:05.559 "io_qpairs": 1, 00:24:05.559 "current_admin_qpairs": 1, 00:24:05.559 "current_io_qpairs": 1, 00:24:05.559 "pending_bdev_io": 0, 00:24:05.559 "completed_nvme_io": 15566, 00:24:05.559 "transports": [ 00:24:05.559 { 00:24:05.559 "trtype": "TCP" 00:24:05.559 } 00:24:05.559 ] 00:24:05.559 }, 00:24:05.559 { 00:24:05.559 "name": "nvmf_tgt_poll_group_001", 00:24:05.559 "admin_qpairs": 0, 00:24:05.559 "io_qpairs": 1, 00:24:05.559 "current_admin_qpairs": 0, 00:24:05.559 "current_io_qpairs": 1, 00:24:05.559 "pending_bdev_io": 0, 00:24:05.559 "completed_nvme_io": 20154, 00:24:05.559 "transports": [ 00:24:05.559 { 00:24:05.559 "trtype": "TCP" 00:24:05.559 } 00:24:05.559 ] 00:24:05.559 }, 00:24:05.559 { 00:24:05.559 "name": "nvmf_tgt_poll_group_002", 00:24:05.559 "admin_qpairs": 0, 00:24:05.559 "io_qpairs": 1, 00:24:05.559 "current_admin_qpairs": 0, 00:24:05.559 "current_io_qpairs": 1, 00:24:05.559 "pending_bdev_io": 0, 00:24:05.559 "completed_nvme_io": 15503, 00:24:05.559 "transports": [ 00:24:05.559 { 00:24:05.559 "trtype": "TCP" 00:24:05.559 } 00:24:05.559 ] 00:24:05.559 }, 00:24:05.559 { 00:24:05.559 "name": "nvmf_tgt_poll_group_003", 00:24:05.559 "admin_qpairs": 0, 00:24:05.559 "io_qpairs": 1, 00:24:05.559 "current_admin_qpairs": 0, 00:24:05.559 "current_io_qpairs": 1, 00:24:05.559 "pending_bdev_io": 0, 00:24:05.559 "completed_nvme_io": 15459, 00:24:05.559 "transports": [ 00:24:05.559 { 00:24:05.559 "trtype": "TCP" 00:24:05.559 } 00:24:05.559 ] 00:24:05.559 } 00:24:05.559 ] 00:24:05.559 }' 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:24:05.559 21:41:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1515635 00:24:13.693 Initializing NVMe Controllers 00:24:13.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:13.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:13.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:13.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:13.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:13.693 Initialization complete. Launching workers. 
00:24:13.693 ======================================================== 00:24:13.693 Latency(us) 00:24:13.693 Device Information : IOPS MiB/s Average min max 00:24:13.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7967.50 31.12 8036.27 3566.23 11646.11 00:24:13.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10422.19 40.71 6140.62 1273.37 10558.57 00:24:13.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8059.10 31.48 7959.29 2858.82 47558.71 00:24:13.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8083.20 31.57 7918.37 2736.56 12322.92 00:24:13.693 ======================================================== 00:24:13.693 Total : 34531.98 134.89 7418.58 1273.37 47558.71 00:24:13.693 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.693 rmmod nvme_tcp 00:24:13.693 rmmod nvme_fabrics 00:24:13.693 rmmod nvme_keyring 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1515454 ']' 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1515454 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1515454 ']' 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1515454 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1515454 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1515454' 00:24:13.693 killing process with pid 1515454 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1515454 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1515454 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.693 21:41:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.596 21:41:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.596 21:41:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:24:15.596 21:41:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:24:16.973 21:41:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:24:19.506 21:41:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.778 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.779 21:41:24 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:24.779 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:24.779 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:24.779 Found net devices under 0000:af:00.0: cvl_0_0 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:24.779 Found net devices under 0000:af:00.1: cvl_0_1 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.779 
21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:24:24.779 00:24:24.779 --- 10.0.0.2 ping statistics --- 00:24:24.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.779 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:24:24.779 00:24:24.779 --- 10.0.0.1 ping statistics --- 00:24:24.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.779 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:24.779 net.core.busy_poll = 1 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:24.779 net.core.busy_read = 1 00:24:24.779 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1519793 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1519793 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1519793 ']' 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:24.780 21:41:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:24.780 [2024-06-07 21:41:24.858111] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:24:24.780 [2024-06-07 21:41:24.858153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.780 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.780 [2024-06-07 21:41:24.929316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.780 [2024-06-07 21:41:25.025322] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.780 [2024-06-07 21:41:25.025371] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.780 [2024-06-07 21:41:25.025382] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.780 [2024-06-07 21:41:25.025391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.780 [2024-06-07 21:41:25.025399] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
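For anyone replaying this step outside the harness: the adq_configure_driver call traced above amounts to the shell sequence below. This is a minimal sketch lifted from the traced commands, assuming the same environment as this run (an ice-driven E810 port named cvl_0_0 inside the cvl_0_0_ns_spdk namespace, NVMe/TCP listener at 10.0.0.2:4420); it is not a general-purpose ADQ recipe.
# enable hardware TC offload and turn off packet-inspect optimization on the port
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# busy-poll socket queues instead of sleeping on interrupts
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# split the port into two traffic classes: TC0 = queues 0-1, TC1 = queues 2-3 ("hw 1 mode channel" offloads this to the NIC)
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP traffic for 10.0.0.2:4420 into hardware TC 1, skipping the software fallback
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# (the harness then runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS transmit queues with the receive queues)
On the target side this run pairs the filter with 'sock_impl_set_options --enable-placement-id 1' (perf_adq.sh@43 above), which lets each SPDK poll group keep its connections on the hardware queue set the filter steered them to.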
00:24:24.780 [2024-06-07 21:41:25.029049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.780 [2024-06-07 21:41:25.029064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.780 [2024-06-07 21:41:25.029177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.780 [2024-06-07 21:41:25.029178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.038 [2024-06-07 21:41:25.275091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.038 Malloc1 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.038 21:41:25 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:25.038 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.039 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:25.296 [2024-06-07 21:41:25.326732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1519821 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:24:25.296 21:41:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:25.296 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:24:27.195 "tick_rate": 2200000000, 00:24:27.195 "poll_groups": [ 00:24:27.195 { 00:24:27.195 "name": "nvmf_tgt_poll_group_000", 00:24:27.195 "admin_qpairs": 1, 00:24:27.195 "io_qpairs": 3, 00:24:27.195 "current_admin_qpairs": 1, 00:24:27.195 "current_io_qpairs": 3, 00:24:27.195 "pending_bdev_io": 0, 00:24:27.195 "completed_nvme_io": 21165, 00:24:27.195 "transports": [ 00:24:27.195 { 00:24:27.195 "trtype": "TCP" 00:24:27.195 } 00:24:27.195 ] 00:24:27.195 }, 00:24:27.195 { 00:24:27.195 "name": "nvmf_tgt_poll_group_001", 00:24:27.195 "admin_qpairs": 0, 00:24:27.195 "io_qpairs": 1, 00:24:27.195 "current_admin_qpairs": 0, 00:24:27.195 "current_io_qpairs": 1, 00:24:27.195 "pending_bdev_io": 0, 00:24:27.195 "completed_nvme_io": 29194, 00:24:27.195 "transports": [ 00:24:27.195 { 00:24:27.195 "trtype": "TCP" 00:24:27.195 } 00:24:27.195 ] 00:24:27.195 }, 00:24:27.195 { 00:24:27.195 "name": "nvmf_tgt_poll_group_002", 00:24:27.195 "admin_qpairs": 0, 00:24:27.195 "io_qpairs": 0, 00:24:27.195 "current_admin_qpairs": 0, 00:24:27.195 "current_io_qpairs": 0, 00:24:27.195 "pending_bdev_io": 0, 00:24:27.195 "completed_nvme_io": 0, 
00:24:27.195 "transports": [ 00:24:27.195 { 00:24:27.195 "trtype": "TCP" 00:24:27.195 } 00:24:27.195 ] 00:24:27.195 }, 00:24:27.195 { 00:24:27.195 "name": "nvmf_tgt_poll_group_003", 00:24:27.195 "admin_qpairs": 0, 00:24:27.195 "io_qpairs": 0, 00:24:27.195 "current_admin_qpairs": 0, 00:24:27.195 "current_io_qpairs": 0, 00:24:27.195 "pending_bdev_io": 0, 00:24:27.195 "completed_nvme_io": 0, 00:24:27.195 "transports": [ 00:24:27.195 { 00:24:27.195 "trtype": "TCP" 00:24:27.195 } 00:24:27.195 ] 00:24:27.195 } 00:24:27.195 ] 00:24:27.195 }' 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:24:27.195 21:41:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1519821 00:24:35.305 Initializing NVMe Controllers 00:24:35.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:35.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:35.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:35.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:35.305 Initialization complete. Launching workers. 00:24:35.305 ======================================================== 00:24:35.305 Latency(us) 00:24:35.305 Device Information : IOPS MiB/s Average min max 00:24:35.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3848.10 15.03 16640.23 2176.82 67877.26 00:24:35.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15375.70 60.06 4161.93 1481.47 8082.25 00:24:35.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3625.80 14.16 17714.47 3504.29 66085.79 00:24:35.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3744.50 14.63 17101.45 2275.39 67208.13 00:24:35.305 ======================================================== 00:24:35.305 Total : 26594.09 103.88 9637.15 1481.47 67877.26 00:24:35.305 00:24:35.305 21:41:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:24:35.305 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.305 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:35.305 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.305 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:35.305 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.305 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.305 rmmod nvme_tcp 00:24:35.305 rmmod nvme_fabrics 00:24:35.305 rmmod nvme_keyring 00:24:35.305 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1519793 ']' 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1519793 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1519793 ']' 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1519793 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1519793 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1519793' 00:24:35.564 killing process with pid 1519793 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1519793 00:24:35.564 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1519793 00:24:35.824 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:35.824 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:35.824 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:35.824 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.824 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:35.824 21:41:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.824 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.824 21:41:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.171 21:41:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:39.171 21:41:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:39.171 00:24:39.171 real 0m51.545s 00:24:39.171 user 2m47.545s 00:24:39.171 sys 0m9.951s 00:24:39.171 21:41:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:39.171 21:41:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:39.171 ************************************ 00:24:39.171 END TEST nvmf_perf_adq 00:24:39.171 ************************************ 00:24:39.171 21:41:38 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:39.171 21:41:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:39.171 21:41:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:39.171 21:41:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:39.171 ************************************ 00:24:39.171 START TEST nvmf_shutdown 00:24:39.171 ************************************ 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:39.171 * Looking for test storage... 
00:24:39.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.171 21:41:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:39.172 ************************************ 00:24:39.172 START TEST nvmf_shutdown_tc1 00:24:39.172 ************************************ 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:24:39.172 21:41:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.172 21:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.731 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:45.732 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:45.732 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.732 21:41:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:45.732 Found net devices under 0000:af:00.0: cvl_0_0 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:45.732 Found net devices under 0000:af:00.1: cvl_0_1 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:24:45.732 00:24:45.732 --- 10.0.0.2 ping statistics --- 00:24:45.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.732 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:45.732 00:24:45.732 --- 10.0.0.1 ping statistics --- 00:24:45.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.732 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.732 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1525804 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1525804 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1525804 ']' 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:45.733 21:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:45.733 [2024-06-07 21:41:45.618994] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
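For readers reproducing this setup by hand: the namespace plumbing traced above reduces to a short script. A minimal sketch of what nvmf_tcp_init in nvmf/common.sh performs, assuming the two e810 ports enumerate as cvl_0_0 (target side) and cvl_0_1 (initiator side), exactly as in this run:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                                  # start from unaddressed interfaces
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener port
ping -c 1 10.0.0.2                                        # root namespace -> target, as verified above
ip netns exec "$NS" ping -c 1 10.0.0.1                    # namespace -> initiator

Placing one physical port in its own namespace lets a single host act as both NVMe/TCP target and initiator over real NIC hardware rather than a loopback path, which is why every nvmf_tgt invocation in this log is wrapped in ip netns exec.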
00:24:45.733 [2024-06-07 21:41:45.619068] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.733 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.733 [2024-06-07 21:41:45.706135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.733 [2024-06-07 21:41:45.797209] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.733 [2024-06-07 21:41:45.797254] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.733 [2024-06-07 21:41:45.797265] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.733 [2024-06-07 21:41:45.797274] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.733 [2024-06-07 21:41:45.797282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.733 [2024-06-07 21:41:45.797390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.733 [2024-06-07 21:41:45.797504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.733 [2024-06-07 21:41:45.797616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:45.733 [2024-06-07 21:41:45.797617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.301 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:46.301 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:24:46.301 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.301 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:46.560 [2024-06-07 21:41:46.610759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:46.560 21:41:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.560 21:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:46.560 Malloc1 00:24:46.560 [2024-06-07 21:41:46.710759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.560 Malloc2 00:24:46.560 Malloc3 00:24:46.560 Malloc4 00:24:46.819 Malloc5 00:24:46.819 Malloc6 00:24:46.819 Malloc7 00:24:46.819 Malloc8 00:24:46.819 Malloc9 00:24:46.819 Malloc10 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1526170 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1526170 
/var/tmp/bdevperf.sock 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1526170 ']' 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.079 { 00:24:47.079 "params": { 00:24:47.079 "name": "Nvme$subsystem", 00:24:47.079 "trtype": "$TEST_TRANSPORT", 00:24:47.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.079 "adrfam": "ipv4", 00:24:47.079 "trsvcid": "$NVMF_PORT", 00:24:47.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.079 "hdgst": ${hdgst:-false}, 00:24:47.079 "ddgst": ${ddgst:-false} 00:24:47.079 }, 00:24:47.079 "method": "bdev_nvme_attach_controller" 00:24:47.079 } 00:24:47.079 EOF 00:24:47.079 )") 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.079 { 00:24:47.079 "params": { 00:24:47.079 "name": "Nvme$subsystem", 00:24:47.079 "trtype": "$TEST_TRANSPORT", 00:24:47.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.079 "adrfam": "ipv4", 00:24:47.079 "trsvcid": "$NVMF_PORT", 00:24:47.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.079 "hdgst": ${hdgst:-false}, 00:24:47.079 "ddgst": ${ddgst:-false} 00:24:47.079 }, 00:24:47.079 "method": "bdev_nvme_attach_controller" 00:24:47.079 } 00:24:47.079 EOF 00:24:47.079 )") 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.079 { 00:24:47.079 "params": { 00:24:47.079 
"name": "Nvme$subsystem", 00:24:47.079 "trtype": "$TEST_TRANSPORT", 00:24:47.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.079 "adrfam": "ipv4", 00:24:47.079 "trsvcid": "$NVMF_PORT", 00:24:47.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.079 "hdgst": ${hdgst:-false}, 00:24:47.079 "ddgst": ${ddgst:-false} 00:24:47.079 }, 00:24:47.079 "method": "bdev_nvme_attach_controller" 00:24:47.079 } 00:24:47.079 EOF 00:24:47.079 )") 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.079 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.079 { 00:24:47.079 "params": { 00:24:47.079 "name": "Nvme$subsystem", 00:24:47.079 "trtype": "$TEST_TRANSPORT", 00:24:47.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.079 "adrfam": "ipv4", 00:24:47.079 "trsvcid": "$NVMF_PORT", 00:24:47.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.079 "hdgst": ${hdgst:-false}, 00:24:47.079 "ddgst": ${ddgst:-false} 00:24:47.079 }, 00:24:47.079 "method": "bdev_nvme_attach_controller" 00:24:47.079 } 00:24:47.079 EOF 00:24:47.079 )") 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.080 { 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme$subsystem", 00:24:47.080 "trtype": "$TEST_TRANSPORT", 00:24:47.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "$NVMF_PORT", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.080 "hdgst": ${hdgst:-false}, 00:24:47.080 "ddgst": ${ddgst:-false} 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 } 00:24:47.080 EOF 00:24:47.080 )") 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.080 { 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme$subsystem", 00:24:47.080 "trtype": "$TEST_TRANSPORT", 00:24:47.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "$NVMF_PORT", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.080 "hdgst": ${hdgst:-false}, 00:24:47.080 "ddgst": ${ddgst:-false} 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 } 00:24:47.080 EOF 00:24:47.080 )") 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.080 { 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme$subsystem", 
00:24:47.080 "trtype": "$TEST_TRANSPORT", 00:24:47.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "$NVMF_PORT", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.080 "hdgst": ${hdgst:-false}, 00:24:47.080 "ddgst": ${ddgst:-false} 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 } 00:24:47.080 EOF 00:24:47.080 )") 00:24:47.080 [2024-06-07 21:41:47.191772] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:24:47.080 [2024-06-07 21:41:47.191835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.080 { 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme$subsystem", 00:24:47.080 "trtype": "$TEST_TRANSPORT", 00:24:47.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "$NVMF_PORT", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.080 "hdgst": ${hdgst:-false}, 00:24:47.080 "ddgst": ${ddgst:-false} 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 } 00:24:47.080 EOF 00:24:47.080 )") 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.080 { 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme$subsystem", 00:24:47.080 "trtype": "$TEST_TRANSPORT", 00:24:47.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "$NVMF_PORT", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.080 "hdgst": ${hdgst:-false}, 00:24:47.080 "ddgst": ${ddgst:-false} 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 } 00:24:47.080 EOF 00:24:47.080 )") 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.080 { 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme$subsystem", 00:24:47.080 "trtype": "$TEST_TRANSPORT", 00:24:47.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "$NVMF_PORT", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.080 "hdgst": ${hdgst:-false}, 00:24:47.080 "ddgst": ${ddgst:-false} 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 } 00:24:47.080 EOF 00:24:47.080 )") 00:24:47.080 21:41:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:47.080 21:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme1", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "4420", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.080 "hdgst": false, 00:24:47.080 "ddgst": false 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 },{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme2", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "4420", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:47.080 "hdgst": false, 00:24:47.080 "ddgst": false 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 },{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme3", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "4420", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:47.080 "hdgst": false, 00:24:47.080 "ddgst": false 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 },{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme4", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "4420", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:47.080 "hdgst": false, 00:24:47.080 "ddgst": false 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 },{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme5", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "4420", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:47.080 "hdgst": false, 00:24:47.080 "ddgst": false 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 },{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme6", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "4420", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:47.080 "hdgst": false, 00:24:47.080 "ddgst": false 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 },{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme7", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "4420", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:47.080 "hdgst": false, 00:24:47.080 "ddgst": false 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 },{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme8", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 
00:24:47.080 "trsvcid": "4420", 00:24:47.080 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:47.080 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:47.080 "hdgst": false, 00:24:47.080 "ddgst": false 00:24:47.080 }, 00:24:47.080 "method": "bdev_nvme_attach_controller" 00:24:47.080 },{ 00:24:47.080 "params": { 00:24:47.080 "name": "Nvme9", 00:24:47.080 "trtype": "tcp", 00:24:47.080 "traddr": "10.0.0.2", 00:24:47.080 "adrfam": "ipv4", 00:24:47.080 "trsvcid": "4420", 00:24:47.081 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:47.081 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:47.081 "hdgst": false, 00:24:47.081 "ddgst": false 00:24:47.081 }, 00:24:47.081 "method": "bdev_nvme_attach_controller" 00:24:47.081 },{ 00:24:47.081 "params": { 00:24:47.081 "name": "Nvme10", 00:24:47.081 "trtype": "tcp", 00:24:47.081 "traddr": "10.0.0.2", 00:24:47.081 "adrfam": "ipv4", 00:24:47.081 "trsvcid": "4420", 00:24:47.081 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:47.081 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:47.081 "hdgst": false, 00:24:47.081 "ddgst": false 00:24:47.081 }, 00:24:47.081 "method": "bdev_nvme_attach_controller" 00:24:47.081 }' 00:24:47.081 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.081 [2024-06-07 21:41:47.282897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.339 [2024-06-07 21:41:47.370968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1526170 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:48.718 21:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:49.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1526170 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1525804 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.655 { 00:24:49.655 "params": { 00:24:49.655 "name": "Nvme$subsystem", 00:24:49.655 "trtype": "$TEST_TRANSPORT", 00:24:49.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.655 "adrfam": "ipv4", 00:24:49.655 "trsvcid": "$NVMF_PORT", 00:24:49.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.655 "hdgst": ${hdgst:-false}, 00:24:49.655 "ddgst": ${ddgst:-false} 00:24:49.655 }, 00:24:49.655 "method": "bdev_nvme_attach_controller" 00:24:49.655 } 00:24:49.655 EOF 00:24:49.655 )") 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.655 { 00:24:49.655 "params": { 00:24:49.655 "name": "Nvme$subsystem", 00:24:49.655 "trtype": "$TEST_TRANSPORT", 00:24:49.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.655 "adrfam": "ipv4", 00:24:49.655 "trsvcid": "$NVMF_PORT", 00:24:49.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.655 "hdgst": ${hdgst:-false}, 00:24:49.655 "ddgst": ${ddgst:-false} 00:24:49.655 }, 00:24:49.655 "method": "bdev_nvme_attach_controller" 00:24:49.655 } 00:24:49.655 EOF 00:24:49.655 )") 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.655 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.655 { 00:24:49.655 "params": { 00:24:49.655 "name": "Nvme$subsystem", 00:24:49.655 "trtype": "$TEST_TRANSPORT", 00:24:49.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.655 "adrfam": "ipv4", 00:24:49.655 "trsvcid": "$NVMF_PORT", 00:24:49.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.656 "hdgst": ${hdgst:-false}, 00:24:49.656 "ddgst": ${ddgst:-false} 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 } 00:24:49.656 EOF 00:24:49.656 )") 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.656 { 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme$subsystem", 00:24:49.656 "trtype": "$TEST_TRANSPORT", 00:24:49.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "$NVMF_PORT", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.656 "hdgst": ${hdgst:-false}, 00:24:49.656 "ddgst": ${ddgst:-false} 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 } 00:24:49.656 EOF 00:24:49.656 )") 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:24:49.656 { 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme$subsystem", 00:24:49.656 "trtype": "$TEST_TRANSPORT", 00:24:49.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "$NVMF_PORT", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.656 "hdgst": ${hdgst:-false}, 00:24:49.656 "ddgst": ${ddgst:-false} 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 } 00:24:49.656 EOF 00:24:49.656 )") 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.656 { 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme$subsystem", 00:24:49.656 "trtype": "$TEST_TRANSPORT", 00:24:49.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "$NVMF_PORT", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.656 "hdgst": ${hdgst:-false}, 00:24:49.656 "ddgst": ${ddgst:-false} 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 } 00:24:49.656 EOF 00:24:49.656 )") 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.656 { 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme$subsystem", 00:24:49.656 "trtype": "$TEST_TRANSPORT", 00:24:49.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "$NVMF_PORT", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.656 "hdgst": ${hdgst:-false}, 00:24:49.656 "ddgst": ${ddgst:-false} 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 } 00:24:49.656 EOF 00:24:49.656 )") 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.656 [2024-06-07 21:41:49.761790] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:24:49.656 [2024-06-07 21:41:49.761852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526637 ] 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.656 { 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme$subsystem", 00:24:49.656 "trtype": "$TEST_TRANSPORT", 00:24:49.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "$NVMF_PORT", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.656 "hdgst": ${hdgst:-false}, 00:24:49.656 "ddgst": ${ddgst:-false} 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 } 00:24:49.656 EOF 00:24:49.656 )") 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.656 { 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme$subsystem", 00:24:49.656 "trtype": "$TEST_TRANSPORT", 00:24:49.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "$NVMF_PORT", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.656 "hdgst": ${hdgst:-false}, 00:24:49.656 "ddgst": ${ddgst:-false} 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 } 00:24:49.656 EOF 00:24:49.656 )") 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:49.656 { 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme$subsystem", 00:24:49.656 "trtype": "$TEST_TRANSPORT", 00:24:49.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "$NVMF_PORT", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.656 "hdgst": ${hdgst:-false}, 00:24:49.656 "ddgst": ${ddgst:-false} 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 } 00:24:49.656 EOF 00:24:49.656 )") 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
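The blocks traced above are the gen_nvmf_target_json pattern: one unquoted heredoc per subsystem, expanded when cat runs (which is why the trace shows literal $subsystem while the printf output below shows Nvme1..Nvme10), accumulated into an array and comma-joined by the IFS=, / printf pair that follows. A minimal sketch of the same technique; the outer "subsystems"/"bdev" wrapper is an assumption here, since only the joined controller entries are visible in the trace:

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # Unquoted heredoc: $subsystem expands at cat time, yielding one
        # bdev_nvme_attach_controller entry per requested subsystem.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,   # "${config[*]}" joins the fragments with commas
    jq . <<<"{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}

The consumer never sees a temp file: shutdown.sh hands the generated document over a process substitution, e.g. bdevperf --json <(gen_target_json {1..10}) -q 64 -o 65536 -w verify -t 1, which is why the trace above reads --json /dev/fd/62.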
00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:49.656 21:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme1", 00:24:49.656 "trtype": "tcp", 00:24:49.656 "traddr": "10.0.0.2", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "4420", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:49.656 "hdgst": false, 00:24:49.656 "ddgst": false 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 },{ 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme2", 00:24:49.656 "trtype": "tcp", 00:24:49.656 "traddr": "10.0.0.2", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "4420", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:49.656 "hdgst": false, 00:24:49.656 "ddgst": false 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 },{ 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme3", 00:24:49.656 "trtype": "tcp", 00:24:49.656 "traddr": "10.0.0.2", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "4420", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:49.656 "hdgst": false, 00:24:49.656 "ddgst": false 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 },{ 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme4", 00:24:49.656 "trtype": "tcp", 00:24:49.656 "traddr": "10.0.0.2", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "4420", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:49.656 "hdgst": false, 00:24:49.656 "ddgst": false 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 },{ 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme5", 00:24:49.656 "trtype": "tcp", 00:24:49.656 "traddr": "10.0.0.2", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "4420", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:49.656 "hdgst": false, 00:24:49.656 "ddgst": false 00:24:49.656 }, 00:24:49.656 "method": "bdev_nvme_attach_controller" 00:24:49.656 },{ 00:24:49.656 "params": { 00:24:49.656 "name": "Nvme6", 00:24:49.656 "trtype": "tcp", 00:24:49.656 "traddr": "10.0.0.2", 00:24:49.656 "adrfam": "ipv4", 00:24:49.656 "trsvcid": "4420", 00:24:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:49.656 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:49.656 "hdgst": false, 00:24:49.656 "ddgst": false 00:24:49.657 }, 00:24:49.657 "method": "bdev_nvme_attach_controller" 00:24:49.657 },{ 00:24:49.657 "params": { 00:24:49.657 "name": "Nvme7", 00:24:49.657 "trtype": "tcp", 00:24:49.657 "traddr": "10.0.0.2", 00:24:49.657 "adrfam": "ipv4", 00:24:49.657 "trsvcid": "4420", 00:24:49.657 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:49.657 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:49.657 "hdgst": false, 00:24:49.657 "ddgst": false 00:24:49.657 }, 00:24:49.657 "method": "bdev_nvme_attach_controller" 00:24:49.657 },{ 00:24:49.657 "params": { 00:24:49.657 "name": "Nvme8", 00:24:49.657 "trtype": "tcp", 00:24:49.657 "traddr": "10.0.0.2", 00:24:49.657 "adrfam": "ipv4", 00:24:49.657 "trsvcid": "4420", 00:24:49.657 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:49.657 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:49.657 "hdgst": false, 
00:24:49.657 "ddgst": false 00:24:49.657 }, 00:24:49.657 "method": "bdev_nvme_attach_controller" 00:24:49.657 },{ 00:24:49.657 "params": { 00:24:49.657 "name": "Nvme9", 00:24:49.657 "trtype": "tcp", 00:24:49.657 "traddr": "10.0.0.2", 00:24:49.657 "adrfam": "ipv4", 00:24:49.657 "trsvcid": "4420", 00:24:49.657 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:49.657 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:49.657 "hdgst": false, 00:24:49.657 "ddgst": false 00:24:49.657 }, 00:24:49.657 "method": "bdev_nvme_attach_controller" 00:24:49.657 },{ 00:24:49.657 "params": { 00:24:49.657 "name": "Nvme10", 00:24:49.657 "trtype": "tcp", 00:24:49.657 "traddr": "10.0.0.2", 00:24:49.657 "adrfam": "ipv4", 00:24:49.657 "trsvcid": "4420", 00:24:49.657 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:49.657 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:49.657 "hdgst": false, 00:24:49.657 "ddgst": false 00:24:49.657 }, 00:24:49.657 "method": "bdev_nvme_attach_controller" 00:24:49.657 }' 00:24:49.657 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.657 [2024-06-07 21:41:49.852014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.916 [2024-06-07 21:41:49.939641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.294 Running I/O for 1 seconds... 00:24:52.673 00:24:52.673 Latency(us) 00:24:52.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.673 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme1n1 : 1.09 181.59 11.35 0.00 0.00 330699.83 22997.18 274536.26 00:24:52.673 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme2n1 : 1.18 162.84 10.18 0.00 0.00 380486.75 26214.40 312666.30 00:24:52.673 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme3n1 : 1.18 216.27 13.52 0.00 0.00 280535.04 22878.02 308853.29 00:24:52.673 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme4n1 : 1.22 209.99 13.12 0.00 0.00 283493.93 20614.05 305040.29 00:24:52.673 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme5n1 : 1.10 174.09 10.88 0.00 0.00 331681.36 25856.93 303133.79 00:24:52.673 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme6n1 : 1.23 208.65 13.04 0.00 0.00 273352.15 28359.21 306946.79 00:24:52.673 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme7n1 : 1.23 208.48 13.03 0.00 0.00 267786.71 50045.67 280255.77 00:24:52.673 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme8n1 : 1.24 207.21 12.95 0.00 0.00 264189.44 17635.14 299320.79 00:24:52.673 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme9n1 : 1.24 206.50 12.91 0.00 0.00 259338.01 18945.86 306946.79 00:24:52.673 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:52.673 Verification LBA range: start 0x0 length 0x400 00:24:52.673 Nvme10n1 : 1.25 204.79 12.80 0.00 0.00 256018.50 13405.09 339357.32 00:24:52.673 =================================================================================================================== 00:24:52.673 Total : 1980.42 123.78 0.00 0.00 288416.84 13405.09 339357.32 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:24:52.673 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.674 rmmod nvme_tcp 00:24:52.674 rmmod nvme_fabrics 00:24:52.674 rmmod nvme_keyring 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1525804 ']' 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1525804 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 1525804 ']' 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 1525804 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1525804 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1525804' 00:24:52.674 killing process with pid 1525804 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 1525804 00:24:52.674 21:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 1525804 
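With tc1's verify pass complete, stoptarget/nvmftestfini above unwinds everything the init built. Condensed, the sequence is sketched below; the ip netns delete step is an assumption about what _remove_spdk_ns does, since its output is redirected away (14> /dev/null) in the trace that follows:

sync                                   # flush outstanding I/O before unloading drivers
for i in {1..20}; do                   # nvme-tcp can stay busy briefly after the run,
    modprobe -v -r nvme-tcp && break   # so removal is retried
done
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"     # target pid recorded at startup (1525804 here);
                                       # wait only works from the shell that spawned it
ip netns delete cvl_0_0_ns_spdk        # assumed inside _remove_spdk_ns; frees cvl_0_0
ip -4 addr flush cvl_0_1               # matches nvmf/common.sh@279 below

Tearing the namespace down between test cases is what lets tc2 below rebuild the identical topology from scratch.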
00:24:53.242 21:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:53.242 21:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:53.242 21:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:53.242 21:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.242 21:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.242 21:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.242 21:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.242 21:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.147 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:55.147 00:24:55.147 real 0m16.256s 00:24:55.147 user 0m36.074s 00:24:55.147 sys 0m6.265s 00:24:55.147 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:55.147 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:55.147 ************************************ 00:24:55.147 END TEST nvmf_shutdown_tc1 00:24:55.147 ************************************ 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:55.407 ************************************ 00:24:55.407 START TEST nvmf_shutdown_tc2 00:24:55.407 ************************************ 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.407 21:41:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:55.407 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:55.407 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:55.407 Found net devices under 0000:af:00.0: cvl_0_0 00:24:55.407 21:41:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.407 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:55.408 Found net devices under 0000:af:00.1: cvl_0_1 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.408 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:24:55.667 00:24:55.667 --- 10.0.0.2 ping statistics --- 00:24:55.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.667 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:24:55.667 00:24:55.667 --- 10.0.0.1 ping statistics --- 00:24:55.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.667 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1527790 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1527790 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@830 -- # '[' -z 1527790 ']' 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:55.667 21:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.668 [2024-06-07 21:41:55.888227] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:24:55.668 [2024-06-07 21:41:55.888270] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.668 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.927 [2024-06-07 21:41:55.962770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.927 [2024-06-07 21:41:56.053343] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.927 [2024-06-07 21:41:56.053382] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.927 [2024-06-07 21:41:56.053393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.927 [2024-06-07 21:41:56.053402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.927 [2024-06-07 21:41:56.053409] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
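The entries above launch nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0x1E and then block in waitforlisten until the target's RPC socket answers, which is when the reactor start-up notices that follow appear. A minimal sketch of that wait pattern, assuming the max_retries=100 and /var/tmp/spdk.sock values visible in the trace (the real helper in autotest_common.sh also probes the socket with an RPC call, which this sketch omits):

    # Poll until the app with PID $1 is alive and its RPC socket exists.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died during startup
            [ -S "$rpc_addr" ] && return 0           # UNIX socket is up, target is listening
            sleep 0.1                                # interval is an assumption
        done
        return 1
    }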
00:24:55.927 [2024-06-07 21:41:56.053517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.927 [2024-06-07 21:41:56.053542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.927 [2024-06-07 21:41:56.053657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.927 [2024-06-07 21:41:56.053656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:55.927 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:55.927 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:24:55.927 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.927 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:55.927 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.186 [2024-06-07 21:41:56.214212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.186 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.187 21:41:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.187 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.187 Malloc1 00:24:56.187 [2024-06-07 21:41:56.314316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.187 Malloc2 00:24:56.187 Malloc3 00:24:56.187 Malloc4 00:24:56.446 Malloc5 00:24:56.446 Malloc6 00:24:56.446 Malloc7 00:24:56.446 Malloc8 00:24:56.446 Malloc9 00:24:56.446 Malloc10 00:24:56.446 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.446 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:56.446 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:56.446 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1528096 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1528096 /var/tmp/bdevperf.sock 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1528096 ']' 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
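The bdevperf invocation above reads its controller configuration from /dev/fd/63: the test script hands gen_nvmf_target_json's output to bdevperf through bash process substitution instead of writing a file to disk, and the heredoc expansion traced below is that helper at work. A hedged reconstruction of the launch (arguments copied from the traced command line; the backgrounding and perfpid capture are inferred from the waitforlisten on /var/tmp/bdevperf.sock):

    # <( ... ) exposes the helper's stdout as /dev/fd/NN, which is why the
    # traced command shows --json /dev/fd/63 rather than a real path.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!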
00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.706 { 00:24:56.706 "params": { 00:24:56.706 "name": "Nvme$subsystem", 00:24:56.706 "trtype": "$TEST_TRANSPORT", 00:24:56.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.706 "adrfam": "ipv4", 00:24:56.706 "trsvcid": "$NVMF_PORT", 00:24:56.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.706 "hdgst": ${hdgst:-false}, 00:24:56.706 "ddgst": ${ddgst:-false} 00:24:56.706 }, 00:24:56.706 "method": "bdev_nvme_attach_controller" 00:24:56.706 } 00:24:56.706 EOF 00:24:56.706 )") 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.706 { 00:24:56.706 "params": { 00:24:56.706 "name": "Nvme$subsystem", 00:24:56.706 "trtype": "$TEST_TRANSPORT", 00:24:56.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.706 "adrfam": "ipv4", 00:24:56.706 "trsvcid": "$NVMF_PORT", 00:24:56.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.706 "hdgst": ${hdgst:-false}, 00:24:56.706 "ddgst": ${ddgst:-false} 00:24:56.706 }, 00:24:56.706 "method": "bdev_nvme_attach_controller" 00:24:56.706 } 00:24:56.706 EOF 00:24:56.706 )") 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.706 { 00:24:56.706 "params": { 00:24:56.706 "name": "Nvme$subsystem", 00:24:56.706 "trtype": "$TEST_TRANSPORT", 00:24:56.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.706 "adrfam": "ipv4", 00:24:56.706 "trsvcid": "$NVMF_PORT", 00:24:56.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.706 "hdgst": ${hdgst:-false}, 00:24:56.706 "ddgst": ${ddgst:-false} 00:24:56.706 }, 00:24:56.706 "method": "bdev_nvme_attach_controller" 00:24:56.706 } 00:24:56.706 EOF 00:24:56.706 )") 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.706 { 00:24:56.706 "params": { 00:24:56.706 "name": "Nvme$subsystem", 00:24:56.706 "trtype": "$TEST_TRANSPORT", 00:24:56.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.706 "adrfam": "ipv4", 00:24:56.706 "trsvcid": "$NVMF_PORT", 
00:24:56.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.706 "hdgst": ${hdgst:-false}, 00:24:56.706 "ddgst": ${ddgst:-false} 00:24:56.706 }, 00:24:56.706 "method": "bdev_nvme_attach_controller" 00:24:56.706 } 00:24:56.706 EOF 00:24:56.706 )") 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.706 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.706 { 00:24:56.706 "params": { 00:24:56.706 "name": "Nvme$subsystem", 00:24:56.706 "trtype": "$TEST_TRANSPORT", 00:24:56.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.706 "adrfam": "ipv4", 00:24:56.706 "trsvcid": "$NVMF_PORT", 00:24:56.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.706 "hdgst": ${hdgst:-false}, 00:24:56.707 "ddgst": ${ddgst:-false} 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 } 00:24:56.707 EOF 00:24:56.707 )") 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.707 { 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme$subsystem", 00:24:56.707 "trtype": "$TEST_TRANSPORT", 00:24:56.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "$NVMF_PORT", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.707 "hdgst": ${hdgst:-false}, 00:24:56.707 "ddgst": ${ddgst:-false} 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 } 00:24:56.707 EOF 00:24:56.707 )") 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.707 { 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme$subsystem", 00:24:56.707 "trtype": "$TEST_TRANSPORT", 00:24:56.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "$NVMF_PORT", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.707 "hdgst": ${hdgst:-false}, 00:24:56.707 "ddgst": ${ddgst:-false} 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 } 00:24:56.707 EOF 00:24:56.707 )") 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.707 [2024-06-07 21:41:56.793433] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:24:56.707 [2024-06-07 21:41:56.793493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528096 ] 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.707 { 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme$subsystem", 00:24:56.707 "trtype": "$TEST_TRANSPORT", 00:24:56.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "$NVMF_PORT", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.707 "hdgst": ${hdgst:-false}, 00:24:56.707 "ddgst": ${ddgst:-false} 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 } 00:24:56.707 EOF 00:24:56.707 )") 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.707 { 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme$subsystem", 00:24:56.707 "trtype": "$TEST_TRANSPORT", 00:24:56.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "$NVMF_PORT", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.707 "hdgst": ${hdgst:-false}, 00:24:56.707 "ddgst": ${ddgst:-false} 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 } 00:24:56.707 EOF 00:24:56.707 )") 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:56.707 { 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme$subsystem", 00:24:56.707 "trtype": "$TEST_TRANSPORT", 00:24:56.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "$NVMF_PORT", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.707 "hdgst": ${hdgst:-false}, 00:24:56.707 "ddgst": ${ddgst:-false} 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 } 00:24:56.707 EOF 00:24:56.707 )") 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
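Each heredoc above appends one bdev_nvme_attach_controller stanza to the config array; the jq, IFS=, and printf steps traced next comma-join those fragments into the single JSON document bdevperf receives. The joining idiom, sketched on its own (the array name and jq validation match the trace, while the wrapper JSON the real helper adds around the fragments is simplified away):

    # "${config[*]}" expands the array joined by the first character of IFS,
    # so a local IFS=, turns ten stanzas into one comma-separated JSON list.
    emit_config() {
        local IFS=,
        printf '%s\n' "${config[*]}"
    }
    emit_config | jq .   # jq validates and pretty-prints before handing it off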
00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:56.707 21:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme1", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.707 "hdgst": false, 00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme2", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:56.707 "hdgst": false, 00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme3", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:56.707 "hdgst": false, 00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme4", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:56.707 "hdgst": false, 00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme5", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:56.707 "hdgst": false, 00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme6", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:56.707 "hdgst": false, 00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme7", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:56.707 "hdgst": false, 00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme8", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:56.707 "hdgst": false, 
00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme9", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.707 "adrfam": "ipv4", 00:24:56.707 "trsvcid": "4420", 00:24:56.707 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:56.707 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:56.707 "hdgst": false, 00:24:56.707 "ddgst": false 00:24:56.707 }, 00:24:56.707 "method": "bdev_nvme_attach_controller" 00:24:56.707 },{ 00:24:56.707 "params": { 00:24:56.707 "name": "Nvme10", 00:24:56.707 "trtype": "tcp", 00:24:56.707 "traddr": "10.0.0.2", 00:24:56.708 "adrfam": "ipv4", 00:24:56.708 "trsvcid": "4420", 00:24:56.708 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:56.708 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:56.708 "hdgst": false, 00:24:56.708 "ddgst": false 00:24:56.708 }, 00:24:56.708 "method": "bdev_nvme_attach_controller" 00:24:56.708 }' 00:24:56.708 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.708 [2024-06-07 21:41:56.884411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.708 [2024-06-07 21:41:56.970168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.608 Running I/O for 10 seconds... 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:58.608 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:58.609 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:58.609 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:58.609 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.609 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.609 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.609 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:58.609 21:41:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:58.609 21:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:58.867 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:59.126 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:59.126 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:59.126 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:59.126 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:59.126 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.126 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1528096 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1528096 ']' 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1528096 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1528096 00:24:59.385 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:59.385 21:41:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:24:59.386 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1528096'
00:24:59.386 killing process with pid 1528096
00:24:59.386 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1528096
00:24:59.386 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1528096
00:24:59.386 Received shutdown signal, test time was about 1.045714 seconds
00:24:59.386
00:24:59.386 Latency(us)
00:24:59.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.386 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme1n1 : 1.00 192.73 12.05 0.00 0.00 327573.26 27286.81 312666.30
00:24:59.386 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme2n1 : 1.02 187.54 11.72 0.00 0.00 328165.62 23712.12 371767.85
00:24:59.386 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme3n1 : 1.04 241.85 15.12 0.00 0.00 248332.03 23950.43 299320.79
00:24:59.386 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme4n1 : 1.01 189.22 11.83 0.00 0.00 310146.02 31457.28 244032.23
00:24:59.386 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme5n1 : 1.02 188.22 11.76 0.00 0.00 304148.48 46947.61 333637.82
00:24:59.386 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme6n1 : 1.03 186.33 11.65 0.00 0.00 299565.30 27167.65 306946.79
00:24:59.386 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme7n1 : 1.02 189.01 11.81 0.00 0.00 285846.81 23354.65 268816.76
00:24:59.386 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme8n1 : 1.01 191.01 11.94 0.00 0.00 275600.91 26571.87 268816.76
00:24:59.386 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme9n1 : 1.04 185.46 11.59 0.00 0.00 277629.36 15847.80 318385.80
00:24:59.386 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:59.386 Verification LBA range: start 0x0 length 0x400
00:24:59.386 Nvme10n1 : 1.04 183.78 11.49 0.00 0.00 273117.87 23473.80 350796.33
00:24:59.386 ===================================================================================================================
00:24:59.386 Total : 1935.15 120.95 0.00 0.00 291658.61 15847.80 371767.85
00:24:59.645 21:41:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1527790
00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:25:00.582 21:42:00
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:00.582 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:00.582 rmmod nvme_tcp 00:25:00.582 rmmod nvme_fabrics 00:25:00.582 rmmod nvme_keyring 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1527790 ']' 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1527790 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1527790 ']' 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1527790 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1527790 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1527790' 00:25:00.841 killing process with pid 1527790 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1527790 00:25:00.841 21:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1527790 00:25:01.100 21:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.100 21:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.100 21:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.100 21:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
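Unloading nvme-tcp right after a test can fail while connections are still tearing down, so nvmftestfini flips to set +e and retries the module removal; the rmmod nvme_tcp / rmmod nvme_fabrics / rmmod nvme_keyring lines above are modprobe -v -r reporting each module it removes. A condensed sketch of that loop, with the retry bound taken from the traced 'for i in {1..20}' (the break and sleep details are assumptions, not the exact helper body):

    set +e
    for i in {1..20}; do
        # -r removes nvme-tcp plus its now-unused dependencies
        modprobe -v -r nvme-tcp && break
        sleep 0.2
    done
    set -e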
00:25:01.100 21:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.100 21:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.100 21:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.100 21:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.631 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.632 00:25:03.632 real 0m7.913s 00:25:03.632 user 0m23.956s 00:25:03.632 sys 0m1.410s 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:03.632 ************************************ 00:25:03.632 END TEST nvmf_shutdown_tc2 00:25:03.632 ************************************ 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:03.632 ************************************ 00:25:03.632 START TEST nvmf_shutdown_tc3 00:25:03.632 ************************************ 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:03.632 21:42:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:03.632 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:03.632 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:03.632 Found net devices under 0000:af:00.0: cvl_0_0 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:03.632 Found net devices under 0000:af:00.1: cvl_0_1 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:03.632 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:03.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:25:03.633 00:25:03.633 --- 10.0.0.2 ping statistics --- 00:25:03.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.633 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:25:03.633 00:25:03.633 --- 10.0.0.1 ping statistics --- 00:25:03.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.633 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1529543 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1529543 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1529543 ']' 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start 
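up and listen on UNIX domain socket /var/tmp/spdk.sock...'

The trace above shows nvmf_tcp_init wiring the two ports discovered under 0000:af:00.0/0000:af:00.1 (cvl_0_0 and cvl_0_1) back to back: one port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and a one-packet ping in each direction proves the 10.0.0.0/24 link before any NVMe traffic is attempted. A condensed sketch of that sequence follows; the variable names are mine, while the commands and addresses are taken directly from the trace (the error handling of the real nvmf/common.sh is omitted):

#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init sequence traced above.
TGT_IF=cvl_0_0      # target-side port, moved into a private namespace
INI_IF=cvl_0_1      # initiator-side port, left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in through the initiator-side port.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# One ping in each direction verifies the link before the target starts.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Because nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix visible above), target and initiator exchange real packets over the physical link even though both run on the same host; the sub-millisecond ping RTTs (0.179 ms and 0.218 ms) confirm the back-to-back wiring.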
00:25:03.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:03.633 21:42:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:03.633 [2024-06-07 21:42:03.882765] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:25:03.633 [2024-06-07 21:42:03.882823] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.891 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.891 [2024-06-07 21:42:03.970205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.891 [2024-06-07 21:42:04.063190] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.891 [2024-06-07 21:42:04.063233] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.891 [2024-06-07 21:42:04.063244] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.891 [2024-06-07 21:42:04.063253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.892 [2024-06-07 21:42:04.063260] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.892 [2024-06-07 21:42:04.063366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.892 [2024-06-07 21:42:04.063863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.892 [2024-06-07 21:42:04.063951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.892 [2024-06-07 21:42:04.063951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:04.826 [2024-06-07 21:42:04.853650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 --
target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.826 21:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:04.826 Malloc1 00:25:04.826 [2024-06-07 21:42:04.953514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.826 Malloc2 00:25:04.826 Malloc3 00:25:04.826 Malloc4 00:25:05.084 Malloc5 00:25:05.084 Malloc6 00:25:05.084 Malloc7 00:25:05.084 Malloc8 00:25:05.084 Malloc9 00:25:05.084 Malloc10 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # 
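timing_exit create_subsystems

The create_subsystems step above is batched: shutdown.sh appends one RPC block per subsystem to rpcs.txt (the ten for/cat pairs at target/shutdown.sh@27-28) and then issues a single bare rpc_cmd at target/shutdown.sh@35, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all materialize together. The heredoc body itself is never echoed by xtrace, so the single iteration below is a plausible reconstruction rather than the literal script; the method names are standard SPDK rpc.py commands, while the bdev size (128 MiB in 512-byte blocks) and serial number are placeholders:

# Hypothetical reconstruction of one pass of the rpcs.txt loop traced above;
# the real heredoc in target/shutdown.sh is not shown in this trace.
i=1
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF

The bare rpc_cmd then replays the whole file in one shot, presumably via stdin redirection (redirections are not echoed by xtrace), so one RPC connection creates all ten subsystems.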
00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1529890 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1529890 /var/tmp/bdevperf.sock 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1529890 ']' 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.344 { 00:25:05.344 "params": { 00:25:05.344 "name": "Nvme$subsystem", 00:25:05.344 "trtype": "$TEST_TRANSPORT", 00:25:05.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.344 "adrfam": "ipv4", 00:25:05.344 "trsvcid": "$NVMF_PORT", 00:25:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.344 "hdgst": ${hdgst:-false}, 00:25:05.344 "ddgst": ${ddgst:-false} 00:25:05.344 }, 00:25:05.344 "method": "bdev_nvme_attach_controller" 00:25:05.344 } 00:25:05.344 EOF 00:25:05.344 )") 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.344 { 00:25:05.344 "params": { 00:25:05.344 "name": "Nvme$subsystem", 00:25:05.344 "trtype": "$TEST_TRANSPORT", 00:25:05.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.344 "adrfam": "ipv4", 00:25:05.344 "trsvcid": "$NVMF_PORT", 00:25:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.344 "hdgst": ${hdgst:-false}, 00:25:05.344 "ddgst": ${ddgst:-false}
00:25:05.344 }, 00:25:05.344 "method": "bdev_nvme_attach_controller" 00:25:05.344 } 00:25:05.344 EOF 00:25:05.344 )") 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.344 { 00:25:05.344 "params": { 00:25:05.344 "name": "Nvme$subsystem", 00:25:05.344 "trtype": "$TEST_TRANSPORT", 00:25:05.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.344 "adrfam": "ipv4", 00:25:05.344 "trsvcid": "$NVMF_PORT", 00:25:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.344 "hdgst": ${hdgst:-false}, 00:25:05.344 "ddgst": ${ddgst:-false} 00:25:05.344 }, 00:25:05.344 "method": "bdev_nvme_attach_controller" 00:25:05.344 } 00:25:05.344 EOF 00:25:05.344 )") 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.344 { 00:25:05.344 "params": { 00:25:05.344 "name": "Nvme$subsystem", 00:25:05.344 "trtype": "$TEST_TRANSPORT", 00:25:05.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.344 "adrfam": "ipv4", 00:25:05.344 "trsvcid": "$NVMF_PORT", 00:25:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.344 "hdgst": ${hdgst:-false}, 00:25:05.344 "ddgst": ${ddgst:-false} 00:25:05.344 }, 00:25:05.344 "method": "bdev_nvme_attach_controller" 00:25:05.344 } 00:25:05.344 EOF 00:25:05.344 )") 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.344 { 00:25:05.344 "params": { 00:25:05.344 "name": "Nvme$subsystem", 00:25:05.344 "trtype": "$TEST_TRANSPORT", 00:25:05.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.344 "adrfam": "ipv4", 00:25:05.344 "trsvcid": "$NVMF_PORT", 00:25:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.344 "hdgst": ${hdgst:-false}, 00:25:05.344 "ddgst": ${ddgst:-false} 00:25:05.344 }, 00:25:05.344 "method": "bdev_nvme_attach_controller" 00:25:05.344 } 00:25:05.344 EOF 00:25:05.344 )") 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.344 { 00:25:05.344 "params": { 00:25:05.344 "name": "Nvme$subsystem", 00:25:05.344 "trtype": "$TEST_TRANSPORT", 00:25:05.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.344 "adrfam": "ipv4", 00:25:05.344 "trsvcid": "$NVMF_PORT", 00:25:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.344 "hdgst": ${hdgst:-false}, 00:25:05.344 "ddgst": ${ddgst:-false} 00:25:05.344 }, 00:25:05.344 
"method": "bdev_nvme_attach_controller" 00:25:05.344 } 00:25:05.344 EOF 00:25:05.344 )") 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.344 [2024-06-07 21:42:05.435031] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:25:05.344 [2024-06-07 21:42:05.435082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529890 ] 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.344 { 00:25:05.344 "params": { 00:25:05.344 "name": "Nvme$subsystem", 00:25:05.344 "trtype": "$TEST_TRANSPORT", 00:25:05.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.344 "adrfam": "ipv4", 00:25:05.344 "trsvcid": "$NVMF_PORT", 00:25:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.344 "hdgst": ${hdgst:-false}, 00:25:05.344 "ddgst": ${ddgst:-false} 00:25:05.344 }, 00:25:05.344 "method": "bdev_nvme_attach_controller" 00:25:05.344 } 00:25:05.344 EOF 00:25:05.344 )") 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.344 { 00:25:05.344 "params": { 00:25:05.344 "name": "Nvme$subsystem", 00:25:05.344 "trtype": "$TEST_TRANSPORT", 00:25:05.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.344 "adrfam": "ipv4", 00:25:05.344 "trsvcid": "$NVMF_PORT", 00:25:05.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.344 "hdgst": ${hdgst:-false}, 00:25:05.344 "ddgst": ${ddgst:-false} 00:25:05.344 }, 00:25:05.344 "method": "bdev_nvme_attach_controller" 00:25:05.344 } 00:25:05.344 EOF 00:25:05.344 )") 00:25:05.344 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.345 { 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme$subsystem", 00:25:05.345 "trtype": "$TEST_TRANSPORT", 00:25:05.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "$NVMF_PORT", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.345 "hdgst": ${hdgst:-false}, 00:25:05.345 "ddgst": ${ddgst:-false} 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 } 00:25:05.345 EOF 00:25:05.345 )") 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.345 { 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme$subsystem", 
00:25:05.345 "trtype": "$TEST_TRANSPORT", 00:25:05.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "$NVMF_PORT", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.345 "hdgst": ${hdgst:-false}, 00:25:05.345 "ddgst": ${ddgst:-false} 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 } 00:25:05.345 EOF 00:25:05.345 )") 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:25:05.345 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:25:05.345 21:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme1", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme2", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme3", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme4", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme5", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme6", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 
00:25:05.345 "name": "Nvme7", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme8", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme9", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 },{ 00:25:05.345 "params": { 00:25:05.345 "name": "Nvme10", 00:25:05.345 "trtype": "tcp", 00:25:05.345 "traddr": "10.0.0.2", 00:25:05.345 "adrfam": "ipv4", 00:25:05.345 "trsvcid": "4420", 00:25:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:05.345 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:05.345 "hdgst": false, 00:25:05.345 "ddgst": false 00:25:05.345 }, 00:25:05.345 "method": "bdev_nvme_attach_controller" 00:25:05.345 }' 00:25:05.345 [2024-06-07 21:42:05.510006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.345 [2024-06-07 21:42:05.596522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.249 Running I/O for 10 seconds... 
00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:07.249 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:07.508 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:07.508 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:07.508 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:07.508 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:07.508 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:07.508 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.768 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.768 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:25:07.768 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:25:07.768 21:42:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1529543 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 1529543 ']' 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 1529543 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1529543 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1529543' 00:25:08.055 killing process with pid 1529543 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 1529543 00:25:08.055 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 1529543 00:25:08.055 [2024-06-07 21:42:08.160716] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157fa10 is same with the state(5) to be set 00:25:08.055 [2024-06-07 21:42:08.160757] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157fa10 is same with the state(5) to be set 00:25:08.055 [2024-06-07 21:42:08.160765] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157fa10 is same with the state(5) to be set 00:25:08.055 [2024-06-07 21:42:08.160771] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x157fa10 is same with the state(5) to be set [... same tcp.c:1602 message repeated verbatim for tqpair=0x157fa10 through 2024-06-07 21:42:08.161110; duplicates trimmed ...] 00:25:08.056 [2024-06-07 21:42:08.162456] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1581020 is same with the state(5) to be set [... same message repeated verbatim for tqpair=0x1581020 through 2024-06-07 21:42:08.163054; duplicates trimmed ...] 00:25:08.056 [2024-06-07 21:42:08.164708] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157feb0 is same with the state(5) to be set [... same message repeated verbatim for tqpair=0x157feb0 through 2024-06-07 21:42:08.165287; duplicates trimmed ...] 00:25:08.057 [2024-06-07 21:42:08.166393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.057 [2024-06-07 21:42:08.166431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.057 [2024-06-07 21:42:08.166444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.057 [2024-06-07 21:42:08.166455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.057 [2024-06-07 21:42:08.166466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.057 [2024-06-07 21:42:08.166476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.057 [2024-06-07 21:42:08.166487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.057 [2024-06-07 21:42:08.166497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.057 [2024-06-07 21:42:08.166507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
00:25:08.057 [2024-06-07 21:42:08.166563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:08.057 [2024-06-07 21:42:08.166575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.057 [2024-06-07 21:42:08.166586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:08.057 [2024-06-07 21:42:08.166595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.057 [2024-06-07 21:42:08.166606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:08.057 [2024-06-07 21:42:08.166616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.057 [2024-06-07 21:42:08.166627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:08.057 [2024-06-07 21:42:08.166636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.057 [2024-06-07 21:42:08.166650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a8e50 is same with the state(5) to be set
00:25:08.057 [2024-06-07 21:42:08.169406] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580810 is same with the state(5) to be set
00:25:08.058 (last message repeated through 21:42:08.170092, interleaved with the nvme_qpair output that follows)
00:25:08.058 [2024-06-07 21:42:08.169567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.058 [2024-06-07 21:42:08.169598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.058 (WRITE sqid:1 cid:42 through cid:63, lba 29952 through 32640 in 128-block steps, each printed and completed ABORTED - SQ DELETION through 21:42:08.170192)
00:25:08.059 [2024-06-07 21:42:08.170204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.059 [2024-06-07 21:42:08.170215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.060 (READ sqid:1 cid:1 through cid:40, lba 24704 through 29696 in 128-block steps, each printed and completed ABORTED - SQ DELETION through 21:42:08.171131)
00:25:08.060 [2024-06-07 21:42:08.170994] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580cb0 is same with the state(5) to be set
00:25:08.060 (last message repeated through 21:42:08.171401, interleaved with the nvme_qpair output above and below)
00:25:08.060 [2024-06-07 21:42:08.171204] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2195050 was disconnected and freed. reset controller.
00:25:08.061 [2024-06-07 21:42:08.172316] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16039a0 is same with the state(5) to be set
00:25:08.061 [2024-06-07 21:42:08.172342] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16039a0 is same with the state(5) to be set
00:25:08.061 [2024-06-07 21:42:08.173730] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set
00:25:08.061 (last message repeated through 21:42:08.174314, interleaved with the nvme_qpair output that follows)
00:25:08.061 [2024-06-07 21:42:08.173800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.061 [2024-06-07 21:42:08.173826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.062 (WRITE sqid:1 cid:16 through cid:32, lba 26624 through 28672 in 128-block steps, each printed and completed ABORTED - SQ DELETION through 21:42:08.174298)
00:25:08.062 [2024-06-07 21:42:08.174311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.062 [2024-06-07 21:42:08.174322] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174324] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174335] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with [2024-06-07 21:42:08.174335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:1the state(5) to be set 00:25:08.062 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174347] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174357] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174367] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174376] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174386] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with [2024-06-07 21:42:08.174386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:1the state(5) to be set 00:25:08.062 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174399] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174409] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174418] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174428] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174441] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with [2024-06-07 21:42:08.174441] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:1the state(5) to be set 00:25:08.062 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-07 21:42:08.174455] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1603e40 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 the state(5) to be set 00:25:08.062 [2024-06-07 21:42:08.174470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.062 [2024-06-07 21:42:08.174691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.062 [2024-06-07 21:42:08.174701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.174980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.174991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.063 [2024-06-07 21:42:08.175317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.063 [2024-06-07 21:42:08.175327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.063 [2024-06-07 21:42:08.175339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.063 [2024-06-07 21:42:08.175349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.063 [2024-06-07 21:42:08.175418] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20a2800 was disconnected and freed. reset controller.
00:25:08.063 [2024-06-07 21:42:08.175520] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175538] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175548] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175556] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175565] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175574] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175583] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175591] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175600] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175608] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175617] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175625] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175630] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:08.063 [2024-06-07 21:42:08.175634] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:08.063 [2024-06-07 21:42:08.175654] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07 21:42:08.175670] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set
00:25:08.063 [2024-06-07
21:42:08.175680] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.063 [2024-06-07 21:42:08.175689] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.063 [2024-06-07 21:42:08.175698] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.063 [2024-06-07 21:42:08.175706] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175714] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7820 (9): Bad file descriptor 00:25:08.064 [2024-06-07 21:42:08.175723] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175733] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175742] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175750] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175759] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175769] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175774] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:08.064 [2024-06-07 21:42:08.175778] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175787] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175804] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175813] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175830] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175840] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175848] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 
[2024-06-07 21:42:08.175857] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175866] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.064 [2024-06-07 21:42:08.175874] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175886] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175895] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175913] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175933] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175942] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175950] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175959] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175971] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175980] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175989] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.175997] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176006] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176014] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176023] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176047] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176056] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176065] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176074] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176082] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176099] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1604300 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176841] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176856] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176862] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176870] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176875] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176886] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176892] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176902] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176913] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176919] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176924] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176929] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176935] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176940] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176950] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176955] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176961] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176966] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176971] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176977] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176982] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176987] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176992] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.176997] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177003] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177008] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177014] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177020] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177032] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177040] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177046] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177051] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177056] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177062] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the 
state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177067] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177072] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177083] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177088] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177093] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177099] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177104] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.065 [2024-06-07 21:42:08.177120] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177126] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177131] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177136] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177142] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177148] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177153] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177159] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177165] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177170] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177177] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177182] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177187] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177193] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177198] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16047a0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:08.066 [2024-06-07 21:42:08.177507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2272cd0 (9): Bad file descriptor 00:25:08.066 [2024-06-07 21:42:08.177560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c94b10 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177748] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baa610 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5200 (9): Bad file descriptor 00:25:08.066 [2024-06-07 21:42:08.177829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ca4b0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.177933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a8e50 (9): Bad file descriptor 00:25:08.066 [2024-06-07 21:42:08.177966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.177978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.177991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264af0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.178095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2273690 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.178211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.066 [2024-06-07 21:42:08.178287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.066 [2024-06-07 21:42:08.178296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f5000 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.178384] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: 
*ERROR*: Unexpected PDU type 0x00 00:25:08.066 [2024-06-07 21:42:08.179330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.066 [2024-06-07 21:42:08.179357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b7820 with addr=10.0.0.2, port=4420 00:25:08.066 [2024-06-07 21:42:08.179368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7820 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.179460] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:08.066 [2024-06-07 21:42:08.180397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.066 [2024-06-07 21:42:08.180423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2272cd0 with addr=10.0.0.2, port=4420 00:25:08.066 [2024-06-07 21:42:08.180433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272cd0 is same with the state(5) to be set 00:25:08.066 [2024-06-07 21:42:08.180447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7820 (9): Bad file descriptor 00:25:08.066 [2024-06-07 21:42:08.180535] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:08.066 [2024-06-07 21:42:08.180603] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:08.067 [2024-06-07 21:42:08.180806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2272cd0 (9): Bad file descriptor 00:25:08.067 [2024-06-07 21:42:08.180825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:08.067 [2024-06-07 21:42:08.180834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:08.067 [2024-06-07 21:42:08.180845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:08.067 [2024-06-07 21:42:08.180951] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:08.067 [2024-06-07 21:42:08.181036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.067 [2024-06-07 21:42:08.181053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:08.067 [2024-06-07 21:42:08.181062] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:08.067 [2024-06-07 21:42:08.181071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:25:08.067 [2024-06-07 21:42:08.181126-21:42:08.181861] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: dump of aborted I/O, one command/completion pair per record: READ sqid:1 cid:4-21 nsid:1 lba:25088-27264, then WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152, then READ sqid:1 cid:22-33 nsid:1 lba:27392-28800 (lba steps by 128 per cid; len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 throughout); every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.067 [2024-06-07 21:42:08.192861-21:42:08.193561] the same command/completion pattern continues after a short gap: READ sqid:1 cid:34-63 nsid:1 lba:28928-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.068 [2024-06-07 21:42:08.193573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a5240 is same with the state(5) to be set
00:25:08.068 [2024-06-07 21:42:08.193675] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20a5240 was disconnected and freed. reset controller.
00:25:08.068 [2024-06-07 21:42:08.193747] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:08.068 [2024-06-07 21:42:08.193773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
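Every completion in the dump above carries the status "(00/08)": Status Code Type 0x0 (generic command status) with Status Code 0x08, which the NVMe specification names Command Aborted due to SQ Deletion, the expected fate of I/O still queued when its submission queue is deleted during qpair teardown. A self-contained C sketch (an illustration of the NVMe completion status-field bit layout, not SPDK's own decoder) shows how the "(00/08)" pair falls out of the 16-bit status word:

/*
 * sq_deletion_status.c -- illustrative sketch decoding the NVMe completion
 * status word the log prints as "(00/08)". Layout per the NVMe base spec:
 * bit 0 = phase tag, bits 8:1 = Status Code (SC), bits 11:9 = Status Code
 * Type (SCT), bit 14 = More (m), bit 15 = Do Not Retry (dnr).
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Status word for SCT=0x0 (generic), SC=0x08, p=0, m=0, dnr=0,
     * matching "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0" above. */
    uint16_t status = (uint16_t)((0x0u << 9) | (0x08u << 1));

    uint8_t sct = (status >> 9) & 0x7;   /* 0x0: generic command status */
    uint8_t sc  = (status >> 1) & 0xff;  /* 0x08: aborted, SQ deletion  */
    int m       = (status >> 14) & 0x1;
    int dnr     = (status >> 15) & 0x1;

    printf("(%02x/%02x) m:%d dnr:%d -> %s\n", sct, sc, m, dnr,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other status");
    return 0;
}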
00:25:08.068 [2024-06-07 21:42:08.193818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c94b10 (9): Bad file descriptor
00:25:08.068 [2024-06-07 21:42:08.193842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baa610 (9): Bad file descriptor
00:25:08.068 [2024-06-07 21:42:08.193870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ca4b0 (9): Bad file descriptor
00:25:08.068 [2024-06-07 21:42:08.193898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2264af0 (9): Bad file descriptor
00:25:08.068 [2024-06-07 21:42:08.193915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2273690 (9): Bad file descriptor
00:25:08.068 [2024-06-07 21:42:08.193935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5000 (9): Bad file descriptor
00:25:08.068 [2024-06-07 21:42:08.195557-21:42:08.197158] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: dump of aborted I/O, one command/completion pair per record: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba = 16384 + 128*cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.070 [2024-06-07 21:42:08.197174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2192800 is same with the state(5) to be set
00:25:08.070 [2024-06-07 21:42:08.199367-21:42:08.201116] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: another dump in the same pattern: READ sqid:1 cid:0-52 nsid:1 lba:16384-23040 (lba = 16384 + 128*cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.071 [2024-06-07 21:42:08.201134] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.071 [2024-06-07 21:42:08.201149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.071 [2024-06-07 21:42:08.201169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.071 [2024-06-07 21:42:08.201183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.071 [2024-06-07 21:42:08.201200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.071 [2024-06-07 21:42:08.201215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.071 [2024-06-07 21:42:08.201233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.201247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.201264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.201279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.201297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.201311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.201329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.201343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.201360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.201374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.201392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.201406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.201424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.201438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.201455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.201469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.201486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218adb0 is same with the state(5) to be set 00:25:08.072 [2024-06-07 21:42:08.203976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:08.072 [2024-06-07 21:42:08.204021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:08.072 [2024-06-07 21:42:08.204050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:08.072 [2024-06-07 21:42:08.204068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.072 [2024-06-07 21:42:08.204177] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.072 [2024-06-07 21:42:08.204771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:08.072 [2024-06-07 21:42:08.205183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.072 [2024-06-07 21:42:08.205212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2264af0 with addr=10.0.0.2, port=4420 00:25:08.072 [2024-06-07 21:42:08.205228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264af0 is same with the state(5) to be set 00:25:08.072 [2024-06-07 21:42:08.205473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.072 [2024-06-07 21:42:08.205495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b7820 with addr=10.0.0.2, port=4420 00:25:08.072 [2024-06-07 21:42:08.205509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7820 is same with the state(5) to be set 00:25:08.072 [2024-06-07 21:42:08.205750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.072 [2024-06-07 21:42:08.205770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2272cd0 with addr=10.0.0.2, port=4420 00:25:08.072 [2024-06-07 21:42:08.205786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272cd0 is same with the state(5) to be set 00:25:08.072 [2024-06-07 21:42:08.206033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.072 [2024-06-07 21:42:08.206055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a8e50 with addr=10.0.0.2, port=4420 00:25:08.072 [2024-06-07 21:42:08.206069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a8e50 is same with the state(5) to be set 00:25:08.072 [2024-06-07 21:42:08.206593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.072 [2024-06-07 21:42:08.206614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.072 [2024-06-07 21:42:08.206636] nvme_qpair.c: 
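Editor's note: errno = 111 in the posix_sock_create errors above is Linux ECONNREFUSED, i.e. nothing was listening at 10.0.0.2:4420 when these host-side reconnect attempts fired, which is consistent with the controller resets logged just before them. A minimal standalone sketch (not SPDK source; only the address and port are taken from the log) that reproduces the same error line:

/* Editor's sketch, not SPDK code: show where "connect() failed,
 * errno = 111" comes from. 111 is Linux ECONNREFUSED, meaning no
 * listener at the target address. 10.0.0.2:4420 is the NVMe/TCP
 * endpoint reported in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Same shape as the posix.c:1037 error line above. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Built with cc and run against a port with no listener, this prints "connect() failed, errno = 111 (Connection refused)", matching the transport errors in this log.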
00:25:08.072-073 [2024-06-07 21:42:08.206593 .. 21:42:08.208235] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (collapsed: 64 back-to-back command/completion pairs, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:25:08.074 [2024-06-07 21:42:08.208247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2193b50 is same with the state(5) to be set
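Editor's note (inferred from the collapsed ranges above, a reading of the data rather than anything the log states): within each aborted batch the READ LBAs advance in lock step with the command ID, lba(cid) = 16384 + 128 * cid; for example cid 13 maps to 16384 + 1664 = 18048 and cid 63 to 16384 + 8064 = 24448. Since each command covers len:128 blocks, a 64-command batch spans the contiguous LBA range 16384 through 24575, so the I/O being aborted here is a sequential 128-blocks-per-command stream, not random I/O.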
00:25:08.074-075 [2024-06-07 21:42:08.209858 .. 21:42:08.211434] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: next batch on this qpair (collapsed: 64 command/completion pairs): READ sqid:1 cid:6..63 nsid:1 lba:17152..24448 len:128 interleaved with WRITE sqid:1 cid:0..5 nsid:1 lba:24576..25216 len:128, all SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.075 [2024-06-07 21:42:08.211446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21965b0 is same with the state(5) to be set
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.075 [2024-06-07 21:42:08.213124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.075 [2024-06-07 21:42:08.213138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.075 [2024-06-07 21:42:08.213151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.075 [2024-06-07 21:42:08.213162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.075 [2024-06-07 21:42:08.213175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.075 [2024-06-07 21:42:08.213186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.075 [2024-06-07 21:42:08.213199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.075 [2024-06-07 21:42:08.213210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.075 [2024-06-07 21:42:08.213223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.075 [2024-06-07 21:42:08.213234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.075 [2024-06-07 21:42:08.213247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.213988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.213999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.214023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.214059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.214083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.214106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.214130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.214154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.214178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.076 [2024-06-07 21:42:08.214202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.076 [2024-06-07 21:42:08.214215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.214582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.214594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a12e0 is same with the state(5) to be set 00:25:08.077 [2024-06-07 21:42:08.216207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.077 [2024-06-07 21:42:08.216797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.077 [2024-06-07 21:42:08.216808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.216821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.216832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.216845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.216856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.216871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.216882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.216896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.216907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.216920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.216932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.216944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.216956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.216969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.216980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.216993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:08.078 [2024-06-07 21:42:08.217400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.217554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.217565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3d20 is same with the state(5) to be set 00:25:08.078 [2024-06-07 21:42:08.218999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.219017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.078 [2024-06-07 21:42:08.219036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.078 [2024-06-07 21:42:08.219047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 
21:42:08.219059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [2024-06-07 21:42:08.219727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.079 [2024-06-07 21:42:08.219737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.079 [29 further identical command/completion pairs elided: READ sqid:1 cid:33 through cid:61, nsid:1, lba 20608 through 24192 in steps of 128, len:128 each, every one completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:25:08.080 [2024-06-07 21:42:08.220406] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.080 [2024-06-07 21:42:08.220415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.080 [2024-06-07 21:42:08.220428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.080 [2024-06-07 21:42:08.220437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.080 [2024-06-07 21:42:08.220448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221c380 is same with the state(5) to be set 00:25:08.080 [2024-06-07 21:42:08.223136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:08.080 [2024-06-07 21:42:08.223170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:08.080 [2024-06-07 21:42:08.223185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:08.080 [2024-06-07 21:42:08.223199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:08.080 [2024-06-07 21:42:08.223447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.080 [2024-06-07 21:42:08.223467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f5200 with addr=10.0.0.2, port=4420 00:25:08.080 [2024-06-07 21:42:08.223478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f5200 is same with the state(5) to be set 00:25:08.080 [2024-06-07 21:42:08.223494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2264af0 (9): Bad file descriptor 00:25:08.080 [2024-06-07 21:42:08.223508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7820 (9): Bad file descriptor 00:25:08.080 [2024-06-07 21:42:08.223520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2272cd0 (9): Bad file descriptor 00:25:08.080 [2024-06-07 21:42:08.223533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a8e50 (9): Bad file descriptor 00:25:08.080 [2024-06-07 21:42:08.223571] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.080 [2024-06-07 21:42:08.223586] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.080 [2024-06-07 21:42:08.223601] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.080 [2024-06-07 21:42:08.223618] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.080 [2024-06-07 21:42:08.223632] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
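Two failure signatures dominate the records above. Status (00/08) decodes as Status Code Type 0x0 (generic command status), Status Code 0x08: Command Aborted due to SQ Deletion, which is what in-flight reads return once their submission queue is torn down. And errno 111 on connect() is ECONNREFUSED: the shutdown test has already killed the target, so nothing is listening on 10.0.0.2:4420 when the reconnect attempts fire. On a Linux box with kernel headers installed the errno value can be confirmed directly:

  grep -w 111 /usr/include/asm-generic/errno.h
  #define ECONNREFUSED    111     /* Connection refused */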
00:25:08.080 [2024-06-07 21:42:08.223645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5200 (9): Bad file descriptor
00:25:08.080 task offset: 29824 on job bdev=Nvme3n1 fails
00:25:08.080
00:25:08.080 Latency(us)
00:25:08.080 All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended in error after the runtime shown
00:25:08.080 Device Information : runtime(s)   IOPS     MiB/s   Fail/s   TO/s   Average     min        max
00:25:08.080 Nvme1n1            : 1.12         113.90   7.12    56.95    0.00   370312.84   27525.12   312666.30
00:25:08.080 Nvme2n1            : 1.13         112.80   7.05    56.40    0.00   366049.59   41704.73   293601.28
00:25:08.080 Nvme3n1            : 1.10         174.74   10.92   58.25    0.00   259563.81   4081.11    306946.79
00:25:08.080 Nvme4n1            : 1.14         117.76   7.36    56.24    0.00   340680.08   19899.11   303133.79
00:25:08.080 Nvme5n1            : 1.14         168.27   10.52   56.09    0.00   258351.94   42419.67   285975.27
00:25:08.080 Nvme6n1            : 1.10         174.16   10.89   58.05    0.00   242850.27   4379.00    314572.80
00:25:08.080 Nvme7n1            : 1.14         118.89   7.43    48.95    0.00   328344.98   22878.02   314572.80
00:25:08.080 Nvme8n1            : 1.12         174.92   10.93   57.12    0.00   231923.77   14179.61   285975.27
00:25:08.081 Nvme9n1            : 1.15         111.61   6.98    55.81    0.00   315130.72   18230.92   316479.30
00:25:08.081 Nvme10n1           : 1.13         113.46   7.09    56.73    0.00   301172.98   29193.31   346983.33
00:25:08.081 ===================================================================================================================
00:25:08.081 Total              :              1380.53  86.28   560.60   0.00   295180.73   4081.11    346983.33
00:25:08.081 [2024-06-07 21:42:08.252466] app.c:1053:spdk_app_stop: *WARNING*:
spdk_app_stop'd on non-zero 00:25:08.081 [2024-06-07 21:42:08.252510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:08.081 [2024-06-07 21:42:08.252759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.252787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2273690 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.252800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2273690 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.253104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.253120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c94b10 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.253130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c94b10 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.253300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.253315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ca4b0 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.253325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ca4b0 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.253501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.253515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baa610 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.253525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baa610 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.253538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.253547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.253559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:08.081 [2024-06-07 21:42:08.253578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.253587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.253596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:08.081 [2024-06-07 21:42:08.253614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.253623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.253632] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
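Stepping back to the bdevperf summary a few records up: the per-device IOPS column is internally consistent with the Total row; summing the ten rows gives 1380.51 against the reported 1380.53, the 0.02 gap being rounding in the printed rows. The check:

  echo '113.90+112.80+174.74+117.76+168.27+174.16+118.89+174.92+111.61+113.46' | bc
  1380.51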
00:25:08.081 [2024-06-07 21:42:08.253646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.253655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.253664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:08.081 [2024-06-07 21:42:08.255315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.081 [2024-06-07 21:42:08.255331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.081 [2024-06-07 21:42:08.255340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.081 [2024-06-07 21:42:08.255348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.081 [2024-06-07 21:42:08.255599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.255616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f5000 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.255626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f5000 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.255645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2273690 (9): Bad file descriptor 00:25:08.081 [2024-06-07 21:42:08.255659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c94b10 (9): Bad file descriptor 00:25:08.081 [2024-06-07 21:42:08.255671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ca4b0 (9): Bad file descriptor 00:25:08.081 [2024-06-07 21:42:08.255683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baa610 (9): Bad file descriptor 00:25:08.081 [2024-06-07 21:42:08.255694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.255703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.255713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:08.081 [2024-06-07 21:42:08.255774] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.081 [2024-06-07 21:42:08.255790] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.081 [2024-06-07 21:42:08.255803] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.081 [2024-06-07 21:42:08.255815] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.081 [2024-06-07 21:42:08.255829] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:08.081 [2024-06-07 21:42:08.256494] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:08.081 [2024-06-07 21:42:08.256539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5000 (9): Bad file descriptor 00:25:08.081 [2024-06-07 21:42:08.256552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.256561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.256571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:08.081 [2024-06-07 21:42:08.256583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.256593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.256602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:08.081 [2024-06-07 21:42:08.256614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.256623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.256632] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:08.081 [2024-06-07 21:42:08.256645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.256653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.256662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:08.081 [2024-06-07 21:42:08.256742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.081 [2024-06-07 21:42:08.256757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:08.081 [2024-06-07 21:42:08.256768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:08.081 [2024-06-07 21:42:08.256779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:08.081 [2024-06-07 21:42:08.256794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.081 [2024-06-07 21:42:08.256803] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.081 [2024-06-07 21:42:08.256811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.081 [2024-06-07 21:42:08.256847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:08.081 [2024-06-07 21:42:08.256856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:08.081 [2024-06-07 21:42:08.256865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:08.081 [2024-06-07 21:42:08.256889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:08.081 [2024-06-07 21:42:08.256906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.081 [2024-06-07 21:42:08.257240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.257259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a8e50 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.257269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a8e50 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.257512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.257526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2272cd0 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.257536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272cd0 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.257696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.257710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b7820 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.257719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7820 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.257996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.081 [2024-06-07 21:42:08.258011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2264af0 with addr=10.0.0.2, port=4420 00:25:08.081 [2024-06-07 21:42:08.258020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264af0 is same with the state(5) to be set 00:25:08.081 [2024-06-07 21:42:08.258064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a8e50 (9): Bad file descriptor 00:25:08.081 [2024-06-07 21:42:08.258078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2272cd0 (9): Bad file descriptor 00:25:08.082 [2024-06-07 21:42:08.258090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7820 (9): Bad file descriptor 00:25:08.082 [2024-06-07 21:42:08.258101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2264af0 (9): Bad file descriptor 00:25:08.082 [2024-06-07 21:42:08.258136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:08.082 [2024-06-07 21:42:08.258147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:08.082 [2024-06-07 21:42:08.258156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:08.082 [2024-06-07 21:42:08.258168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:08.082 [2024-06-07 21:42:08.258177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:08.082 [2024-06-07 21:42:08.258185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:25:08.082 [2024-06-07 21:42:08.258201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:08.082 [2024-06-07 21:42:08.258210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:08.082 [2024-06-07 21:42:08.258219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:08.082 [2024-06-07 21:42:08.258230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:08.082 [2024-06-07 21:42:08.258239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:08.082 [2024-06-07 21:42:08.258248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:08.082 [2024-06-07 21:42:08.258283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.082 [2024-06-07 21:42:08.258293] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.082 [2024-06-07 21:42:08.258301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.082 [2024-06-07 21:42:08.258310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.649 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:25:08.649 21:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1529890 00:25:09.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1529890) - No such process 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.587 rmmod nvme_tcp 00:25:09.587 rmmod nvme_fabrics 00:25:09.587 rmmod nvme_keyring 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.587 21:42:09 
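The nvmfcleanup sequence just traced (sync, then unload nvme-tcp and nvme-fabrics under set +e inside a bounded loop) is a small unload-with-retry idiom. A minimal sketch of the pattern, with the loop bound and module names taken from the trace and the back-off delay assumed:

  set +e                                     # module removal can fail while references are still held
  for i in {1..20}; do
      modprobe -v -r nvme-tcp &&
      modprobe -v -r nvme-fabrics && break   # done once both unload cleanly
      sleep 1                                # assumed delay; the trace does not show the loop internals
  done
  set -e                                     # restore fail-fast for the rest of the test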
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.587 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.588 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.588 21:42:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.552 21:42:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.552 00:25:11.552 real 0m8.304s 00:25:11.552 user 0m21.272s 00:25:11.552 sys 0m1.436s 00:25:11.552 21:42:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:11.552 21:42:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:11.552 ************************************ 00:25:11.552 END TEST nvmf_shutdown_tc3 00:25:11.552 ************************************ 00:25:11.552 21:42:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:25:11.552 00:25:11.552 real 0m32.808s 00:25:11.552 user 1m21.440s 00:25:11.552 sys 0m9.331s 00:25:11.552 21:42:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:11.552 21:42:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:11.552 ************************************ 00:25:11.552 END TEST nvmf_shutdown 00:25:11.552 ************************************ 00:25:11.812 21:42:11 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:25:11.812 21:42:11 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:11.812 21:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.812 21:42:11 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:25:11.812 21:42:11 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:11.812 21:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.812 21:42:11 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:25:11.812 21:42:11 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:11.812 21:42:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:11.812 21:42:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:11.812 21:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.812 ************************************ 00:25:11.812 START TEST nvmf_multicontroller 00:25:11.812 ************************************ 00:25:11.812 21:42:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:11.812 * Looking for test storage... 00:25:11.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[the same three tool prefixes repeated for each nested source]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same repeated prefixes]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo [the exported PATH, identical to the @4 value; duplicate dump elided] 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:11.812 21:42:12
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.812 21:42:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.380 21:42:18 
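The PCI scan that follows classifies ports purely by vendor:device ID; 0x8086:0x159b is the Intel E810 part bound to the ice driver, which turns up below as 0000:af:00.0 and 0000:af:00.1. The same lookup can be reproduced by hand (output shape approximate; the exact model string varies by SKU):

  lspci -d 8086:159b
  af:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-XXV for SFP
  af:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-XXV for SFP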
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:18.380 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:18.380 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:18.380 Found net devices under 0000:af:00.0: cvl_0_0 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:18.380 Found net devices under 0000:af:00.1: cvl_0_1 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.380 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.381 21:42:18 
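Condensing the nvmf_tcp_init records around this point: the test pins one E810 port into a private network namespace so that target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 on cvl_0_1, in the root namespace) exchange real NVMe/TCP traffic on a single host. The wiring, gathered from the traced commands:

  ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                             # reachability check before the test proper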
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:25:18.381 00:25:18.381 --- 10.0.0.2 ping statistics --- 00:25:18.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.381 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:25:18.381 00:25:18.381 --- 10.0.0.1 ping statistics --- 00:25:18.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.381 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1534961 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1534961 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1534961 ']' 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:18.381 21:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:18.381 [2024-06-07 21:42:18.577446] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:25:18.381 [2024-06-07 21:42:18.577502] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.381 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.640 [2024-06-07 21:42:18.664881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:18.640 [2024-06-07 21:42:18.754381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.640 [2024-06-07 21:42:18.754423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.640 [2024-06-07 21:42:18.754433] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.640 [2024-06-07 21:42:18.754441] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.640 [2024-06-07 21:42:18.754449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
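On the core mask: nvmf_tgt was launched with -m 0xE, and 0xE is binary 1110, selecting CPU cores 1, 2 and 3. That matches both the "Total cores available: 3" notice above and the three reactor threads that come up next. A quick check:

  printf 'mask 0x%X -> binary %s\n' 0xE "$(echo 'obase=2; 14' | bc)"
  mask 0xE -> binary 1110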
00:25:18.640 [2024-06-07 21:42:18.754555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.640 [2024-06-07 21:42:18.754678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.640 [2024-06-07 21:42:18.754679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 [2024-06-07 21:42:19.568481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 Malloc0 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 [2024-06-07 21:42:19.634226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 
21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 [2024-06-07 21:42:19.642185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 Malloc1 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1535246 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1535246 /var/tmp/bdevperf.sock 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1535246 ']' 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.578 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:19.579 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.579 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:19.579 21:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:19.838 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:19.838 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:25:19.838 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:19.838 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.838 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.097 NVMe0n1 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.097 1 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.097 request: 00:25:20.097 { 00:25:20.097 "name": "NVMe0", 00:25:20.097 "trtype": "tcp", 00:25:20.097 "traddr": "10.0.0.2", 00:25:20.097 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:20.097 "hostaddr": "10.0.0.2", 00:25:20.097 "hostsvcid": "60000", 00:25:20.097 "adrfam": "ipv4", 00:25:20.097 "trsvcid": "4420", 00:25:20.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.097 "method": "bdev_nvme_attach_controller", 00:25:20.097 "req_id": 1 00:25:20.097 } 00:25:20.097 Got JSON-RPC error response 00:25:20.097 response: 00:25:20.097 { 00:25:20.097 "code": -114, 00:25:20.097 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:20.097 } 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.097 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.097 request: 00:25:20.097 { 00:25:20.097 "name": "NVMe0", 00:25:20.097 "trtype": "tcp", 00:25:20.098 "traddr": "10.0.0.2", 00:25:20.098 "hostaddr": "10.0.0.2", 00:25:20.098 "hostsvcid": "60000", 00:25:20.098 "adrfam": "ipv4", 00:25:20.098 "trsvcid": "4420", 00:25:20.098 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:20.098 "method": "bdev_nvme_attach_controller", 00:25:20.098 "req_id": 1 00:25:20.098 } 00:25:20.098 Got JSON-RPC error response 00:25:20.098 response: 00:25:20.098 { 00:25:20.098 "code": -114, 00:25:20.098 
"message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:20.098 } 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.098 request: 00:25:20.098 { 00:25:20.098 "name": "NVMe0", 00:25:20.098 "trtype": "tcp", 00:25:20.098 "traddr": "10.0.0.2", 00:25:20.098 "hostaddr": "10.0.0.2", 00:25:20.098 "hostsvcid": "60000", 00:25:20.098 "adrfam": "ipv4", 00:25:20.098 "trsvcid": "4420", 00:25:20.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.098 "multipath": "disable", 00:25:20.098 "method": "bdev_nvme_attach_controller", 00:25:20.098 "req_id": 1 00:25:20.098 } 00:25:20.098 Got JSON-RPC error response 00:25:20.098 response: 00:25:20.098 { 00:25:20.098 "code": -114, 00:25:20.098 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:20.098 } 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.098 request: 00:25:20.098 { 00:25:20.098 "name": "NVMe0", 00:25:20.098 "trtype": "tcp", 00:25:20.098 "traddr": "10.0.0.2", 00:25:20.098 "hostaddr": "10.0.0.2", 00:25:20.098 "hostsvcid": "60000", 00:25:20.098 "adrfam": "ipv4", 00:25:20.098 "trsvcid": "4420", 00:25:20.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.098 "multipath": "failover", 00:25:20.098 "method": "bdev_nvme_attach_controller", 00:25:20.098 "req_id": 1 00:25:20.098 } 00:25:20.098 Got JSON-RPC error response 00:25:20.098 response: 00:25:20.098 { 00:25:20.098 "code": -114, 00:25:20.098 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:20.098 } 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.098 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.357 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.357 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:20.357 21:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:21.736 0 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1535246 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1535246 ']' 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1535246 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1535246 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1535246' 00:25:21.736 killing process with pid 1535246 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1535246 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1535246 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:25:21.736 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:21.736 [2024-06-07 21:42:19.744138] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:25:21.736 [2024-06-07 21:42:19.744188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535246 ] 00:25:21.736 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.736 [2024-06-07 21:42:19.820789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.736 [2024-06-07 21:42:19.913820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.736 [2024-06-07 21:42:20.479335] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 023e97ab-2a41-4e5a-a316-a98fc3f3af69 already exists 00:25:21.736 [2024-06-07 21:42:20.479370] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:023e97ab-2a41-4e5a-a316-a98fc3f3af69 alias for bdev NVMe1n1 00:25:21.736 [2024-06-07 21:42:20.479383] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:21.736 Running I/O for 1 seconds... 
00:25:21.736 
00:25:21.736 Latency(us)
00:25:21.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.736 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:25:21.736 NVMe0n1 : 1.00 16924.54 66.11 0.00 0.00 7550.80 4796.04 14954.12
00:25:21.736 ===================================================================================================================
00:25:21.736 Total : 16924.54 66.11 0.00 0.00 7550.80 4796.04 14954.12
00:25:21.736 Received shutdown signal, test time was about 1.000000 seconds
00:25:21.736 
00:25:21.736 Latency(us)
00:25:21.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.736 ===================================================================================================================
00:25:21.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:21.736 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:21.736 21:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:21.736 rmmod nvme_tcp
00:25:21.736 rmmod nvme_fabrics
00:25:21.736 rmmod nvme_keyring
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1534961 ']'
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1534961
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1534961 ']'
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1534961
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1534961
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1534961'
00:25:21.996 killing process with pid 1534961
00:25:21.996 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1534961 00:25:21.996 21:42:22
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1534961 00:25:22.255 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.255 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:22.255 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.255 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.255 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.255 21:42:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.255 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.255 21:42:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.161 21:42:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:24.161 00:25:24.161 real 0m12.459s 00:25:24.161 user 0m14.895s 00:25:24.161 sys 0m5.730s 00:25:24.161 21:42:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:24.161 21:42:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:24.161 ************************************ 00:25:24.161 END TEST nvmf_multicontroller 00:25:24.161 ************************************ 00:25:24.161 21:42:24 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:24.161 21:42:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:24.161 21:42:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:24.161 21:42:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.421 ************************************ 00:25:24.421 START TEST nvmf_aer 00:25:24.421 ************************************ 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:24.421 * Looking for test storage... 
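In short, the multicontroller test that just finished drives one bdevperf process (started with -z -r /var/tmp/bdevperf.sock, as traced) through attach, duplicate-attach, failover-path, and detach cases. A hedged condensation of that sequence, error handling elided (the || line only illustrates the expected -114 rejection):

  brpc() { ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
  # First attach succeeds and surfaces the namespace as NVMe0n1.
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Reusing the name NVMe0 with a different hostnqn, subsystem, or multipath
  # mode is rejected with JSON-RPC error -114, as seen in the log above.
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || echo 'rejected as expected'
  # The second portal of the same subsystem may be added and removed freely.
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  brpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1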
00:25:24.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.421 21:42:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:30.988 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:25:30.988 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:30.988 Found net devices under 0000:af:00.0: cvl_0_0 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:30.988 Found net devices under 0000:af:00.1: cvl_0_1 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:30.988 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.989 
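The NIC discovery loop above reduces to a sysfs glob: for each whitelisted PCI function, the kernel's net directory names the bound interface. A standalone sketch using the two E810 (0x159b) functions found in this run:

  for pci in 0000:af:00.0 0000:af:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done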
21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:30.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:30.989 00:25:30.989 --- 10.0.0.2 ping statistics --- 00:25:30.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.989 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:25:30.989 00:25:30.989 --- 10.0.0.1 ping statistics --- 00:25:30.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.989 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1539531 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1539531 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 1539531 ']' 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:30.989 21:42:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:30.989 [2024-06-07 21:42:30.753787] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:25:30.989 [2024-06-07 21:42:30.753848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.989 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.989 [2024-06-07 21:42:30.849954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.989 [2024-06-07 21:42:30.943341] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.989 [2024-06-07 21:42:30.943379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:30.989 [2024-06-07 21:42:30.943390] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.989 [2024-06-07 21:42:30.943400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.989 [2024-06-07 21:42:30.943409] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.989 [2024-06-07 21:42:30.943461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.989 [2024-06-07 21:42:30.943516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.989 [2024-06-07 21:42:30.943517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.989 [2024-06-07 21:42:30.943480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.556 [2024-06-07 21:42:31.742600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.556 Malloc0 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.556 [2024-06-07 21:42:31.798376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.556 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.556 [ 00:25:31.556 { 00:25:31.556 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:31.556 "subtype": "Discovery", 00:25:31.556 "listen_addresses": [], 00:25:31.556 "allow_any_host": true, 00:25:31.556 "hosts": [] 00:25:31.556 }, 00:25:31.556 { 00:25:31.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.556 "subtype": "NVMe", 00:25:31.556 "listen_addresses": [ 00:25:31.556 { 00:25:31.556 "trtype": "TCP", 00:25:31.556 "adrfam": "IPv4", 00:25:31.556 "traddr": "10.0.0.2", 00:25:31.556 "trsvcid": "4420" 00:25:31.556 } 00:25:31.556 ], 00:25:31.556 "allow_any_host": true, 00:25:31.556 "hosts": [], 00:25:31.556 "serial_number": "SPDK00000000000001", 00:25:31.556 "model_number": "SPDK bdev Controller", 00:25:31.557 "max_namespaces": 2, 00:25:31.557 "min_cntlid": 1, 00:25:31.557 "max_cntlid": 65519, 00:25:31.557 "namespaces": [ 00:25:31.557 { 00:25:31.557 "nsid": 1, 00:25:31.557 "bdev_name": "Malloc0", 00:25:31.557 "name": "Malloc0", 00:25:31.557 "nguid": "EBF1D7432BC64BD9B1ECD762037A1D66", 00:25:31.557 "uuid": "ebf1d743-2bc6-4bd9-b1ec-d762037a1d66" 00:25:31.557 } 00:25:31.557 ] 00:25:31.557 } 00:25:31.557 ] 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1539812 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:25:31.557 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:25:31.815 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.815 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:31.815 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:25:31.815 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:25:31.815 21:42:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 Malloc1 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.815 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.075 Asynchronous Event Request test 00:25:32.075 Attaching to 10.0.0.2 00:25:32.075 Attached to 10.0.0.2 00:25:32.075 Registering asynchronous event callbacks... 00:25:32.075 Starting namespace attribute notice tests for all controllers... 00:25:32.075 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:32.075 aer_cb - Changed Namespace 00:25:32.075 Cleaning up... 00:25:32.075 [ 00:25:32.075 { 00:25:32.075 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:32.075 "subtype": "Discovery", 00:25:32.075 "listen_addresses": [], 00:25:32.075 "allow_any_host": true, 00:25:32.075 "hosts": [] 00:25:32.075 }, 00:25:32.075 { 00:25:32.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.075 "subtype": "NVMe", 00:25:32.075 "listen_addresses": [ 00:25:32.075 { 00:25:32.075 "trtype": "TCP", 00:25:32.075 "adrfam": "IPv4", 00:25:32.075 "traddr": "10.0.0.2", 00:25:32.075 "trsvcid": "4420" 00:25:32.075 } 00:25:32.075 ], 00:25:32.075 "allow_any_host": true, 00:25:32.075 "hosts": [], 00:25:32.075 "serial_number": "SPDK00000000000001", 00:25:32.075 "model_number": "SPDK bdev Controller", 00:25:32.075 "max_namespaces": 2, 00:25:32.075 "min_cntlid": 1, 00:25:32.075 "max_cntlid": 65519, 00:25:32.075 "namespaces": [ 00:25:32.075 { 00:25:32.075 "nsid": 1, 00:25:32.075 "bdev_name": "Malloc0", 00:25:32.075 "name": "Malloc0", 00:25:32.075 "nguid": "EBF1D7432BC64BD9B1ECD762037A1D66", 00:25:32.075 "uuid": "ebf1d743-2bc6-4bd9-b1ec-d762037a1d66" 00:25:32.075 }, 00:25:32.075 { 00:25:32.075 "nsid": 2, 00:25:32.075 "bdev_name": "Malloc1", 00:25:32.075 "name": "Malloc1", 00:25:32.075 "nguid": "AC46E74371784EF5BBFA2F8DA00705F3", 00:25:32.075 "uuid": "ac46e743-7178-4ef5-bbfa-2f8da00705f3" 00:25:32.075 } 00:25:32.075 ] 00:25:32.075 } 00:25:32.075 ] 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1539812 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:32.075 rmmod nvme_tcp 00:25:32.075 rmmod nvme_fabrics 00:25:32.075 rmmod nvme_keyring 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1539531 ']' 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1539531 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 1539531 ']' 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 1539531 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1539531 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1539531' 00:25:32.075 killing process with pid 1539531 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 1539531 00:25:32.075 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 1539531 00:25:32.334 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:32.334 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:32.334 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:32.334 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.334 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:32.334 21:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.334 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:25:32.334 21:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.868 21:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:34.868 00:25:34.868 real 0m10.085s 00:25:34.868 user 0m7.848s 00:25:34.868 sys 0m5.110s 00:25:34.868 21:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:34.868 21:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:34.868 ************************************ 00:25:34.868 END TEST nvmf_aer 00:25:34.868 ************************************ 00:25:34.868 21:42:34 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:34.868 21:42:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:34.868 21:42:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:34.868 21:42:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:34.868 ************************************ 00:25:34.868 START TEST nvmf_async_init 00:25:34.868 ************************************ 00:25:34.868 21:42:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:34.868 * Looking for test storage... 00:25:34.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.868 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.868 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:34.868 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.868 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.869 
21:42:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=03f4114445ff4c558e4843fb8ba41cfa 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:34.869 21:42:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.437 21:42:40 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:41.437 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:41.437 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:41.437 Found net devices under 0000:af:00.0: cvl_0_0 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.437 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:41.437 Found net devices under 0000:af:00.1: cvl_0_1 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.438 21:42:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:25:41.438 00:25:41.438 --- 10.0.0.2 ping statistics --- 00:25:41.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.438 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:25:41.438 00:25:41.438 --- 10.0.0.1 ping statistics --- 00:25:41.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.438 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1543826 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1543826 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 1543826 ']' 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.438 21:42:41 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:41.438 21:42:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:41.438 [2024-06-07 21:42:41.242114] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:25:41.438 [2024-06-07 21:42:41.242171] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.438 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.438 [2024-06-07 21:42:41.337813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.438 [2024-06-07 21:42:41.428671] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.438 [2024-06-07 21:42:41.428712] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.438 [2024-06-07 21:42:41.428723] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.438 [2024-06-07 21:42:41.428732] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.438 [2024-06-07 21:42:41.428739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
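A minimal sketch of what the nvmfappstart/waitforlisten step traced above amounts to, assuming the SPDK repo root as the working directory: launch nvmf_tgt inside the target network namespace, then poll the RPC socket until the app answers. The binary path, namespace name, and flags are copied from the trace; the polling loop is an illustrative stand-in for the autotest waitforlisten helper, whose actual implementation does not appear in this log.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!    # recorded as nvmfpid (1543826) in the trace above
# Stand-in for waitforlisten: block until the UNIX domain RPC socket accepts requests.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
Once the socket responds, the test proceeds with the rpc_cmd provisioning calls shown below (nvmf_create_transport, bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener).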
00:25:41.438 [2024-06-07 21:42:41.428767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.006 [2024-06-07 21:42:42.214006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.006 null0 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 03f4114445ff4c558e4843fb8ba41cfa 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.006 [2024-06-07 21:42:42.254227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.006 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 nvme0n1 00:25:42.265 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.265 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:42.265 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.265 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 [ 00:25:42.265 { 00:25:42.265 "name": "nvme0n1", 00:25:42.265 "aliases": [ 00:25:42.265 "03f41144-45ff-4c55-8e48-43fb8ba41cfa" 00:25:42.265 ], 00:25:42.265 "product_name": "NVMe disk", 00:25:42.265 "block_size": 512, 00:25:42.265 "num_blocks": 2097152, 00:25:42.265 "uuid": "03f41144-45ff-4c55-8e48-43fb8ba41cfa", 00:25:42.265 "assigned_rate_limits": { 00:25:42.265 "rw_ios_per_sec": 0, 00:25:42.265 "rw_mbytes_per_sec": 0, 00:25:42.265 "r_mbytes_per_sec": 0, 00:25:42.265 "w_mbytes_per_sec": 0 00:25:42.265 }, 00:25:42.265 "claimed": false, 00:25:42.265 "zoned": false, 00:25:42.265 "supported_io_types": { 00:25:42.265 "read": true, 00:25:42.265 "write": true, 00:25:42.265 "unmap": false, 00:25:42.265 "write_zeroes": true, 00:25:42.265 "flush": true, 00:25:42.265 "reset": true, 00:25:42.265 "compare": true, 00:25:42.265 "compare_and_write": true, 00:25:42.265 "abort": true, 00:25:42.265 "nvme_admin": true, 00:25:42.265 "nvme_io": true 00:25:42.265 }, 00:25:42.265 "memory_domains": [ 00:25:42.265 { 00:25:42.265 "dma_device_id": "system", 00:25:42.265 "dma_device_type": 1 00:25:42.265 } 00:25:42.265 ], 00:25:42.265 "driver_specific": { 00:25:42.265 "nvme": [ 00:25:42.265 { 00:25:42.265 "trid": { 00:25:42.265 "trtype": "TCP", 00:25:42.265 "adrfam": "IPv4", 00:25:42.265 "traddr": "10.0.0.2", 00:25:42.265 "trsvcid": "4420", 00:25:42.265 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:42.265 }, 00:25:42.265 "ctrlr_data": { 00:25:42.265 "cntlid": 1, 00:25:42.265 "vendor_id": "0x8086", 00:25:42.265 "model_number": "SPDK bdev Controller", 00:25:42.265 "serial_number": "00000000000000000000", 00:25:42.265 "firmware_revision": "24.09", 00:25:42.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:42.265 "oacs": { 00:25:42.265 "security": 0, 00:25:42.265 "format": 0, 00:25:42.265 "firmware": 0, 00:25:42.265 "ns_manage": 0 00:25:42.265 }, 00:25:42.265 "multi_ctrlr": true, 00:25:42.265 "ana_reporting": false 00:25:42.265 }, 00:25:42.265 "vs": { 00:25:42.265 "nvme_version": "1.3" 00:25:42.265 }, 00:25:42.265 "ns_data": { 00:25:42.265 "id": 1, 00:25:42.265 "can_share": true 00:25:42.265 } 00:25:42.265 } 00:25:42.265 ], 00:25:42.265 "mp_policy": "active_passive" 00:25:42.265 } 00:25:42.265 } 00:25:42.265 ] 00:25:42.265 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.265 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:42.265 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.265 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 [2024-06-07 21:42:42.506882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:42.265 [2024-06-07 21:42:42.506953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25043a0 (9): Bad file descriptor 00:25:42.525 [2024-06-07 21:42:42.649143] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.525 [ 00:25:42.525 { 00:25:42.525 "name": "nvme0n1", 00:25:42.525 "aliases": [ 00:25:42.525 "03f41144-45ff-4c55-8e48-43fb8ba41cfa" 00:25:42.525 ], 00:25:42.525 "product_name": "NVMe disk", 00:25:42.525 "block_size": 512, 00:25:42.525 "num_blocks": 2097152, 00:25:42.525 "uuid": "03f41144-45ff-4c55-8e48-43fb8ba41cfa", 00:25:42.525 "assigned_rate_limits": { 00:25:42.525 "rw_ios_per_sec": 0, 00:25:42.525 "rw_mbytes_per_sec": 0, 00:25:42.525 "r_mbytes_per_sec": 0, 00:25:42.525 "w_mbytes_per_sec": 0 00:25:42.525 }, 00:25:42.525 "claimed": false, 00:25:42.525 "zoned": false, 00:25:42.525 "supported_io_types": { 00:25:42.525 "read": true, 00:25:42.525 "write": true, 00:25:42.525 "unmap": false, 00:25:42.525 "write_zeroes": true, 00:25:42.525 "flush": true, 00:25:42.525 "reset": true, 00:25:42.525 "compare": true, 00:25:42.525 "compare_and_write": true, 00:25:42.525 "abort": true, 00:25:42.525 "nvme_admin": true, 00:25:42.525 "nvme_io": true 00:25:42.525 }, 00:25:42.525 "memory_domains": [ 00:25:42.525 { 00:25:42.525 "dma_device_id": "system", 00:25:42.525 "dma_device_type": 1 00:25:42.525 } 00:25:42.525 ], 00:25:42.525 "driver_specific": { 00:25:42.525 "nvme": [ 00:25:42.525 { 00:25:42.525 "trid": { 00:25:42.525 "trtype": "TCP", 00:25:42.525 "adrfam": "IPv4", 00:25:42.525 "traddr": "10.0.0.2", 00:25:42.525 "trsvcid": "4420", 00:25:42.525 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:42.525 }, 00:25:42.525 "ctrlr_data": { 00:25:42.525 "cntlid": 2, 00:25:42.525 "vendor_id": "0x8086", 00:25:42.525 "model_number": "SPDK bdev Controller", 00:25:42.525 "serial_number": "00000000000000000000", 00:25:42.525 "firmware_revision": "24.09", 00:25:42.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:42.525 "oacs": { 00:25:42.525 "security": 0, 00:25:42.525 "format": 0, 00:25:42.525 "firmware": 0, 00:25:42.525 "ns_manage": 0 00:25:42.525 }, 00:25:42.525 "multi_ctrlr": true, 00:25:42.525 "ana_reporting": false 00:25:42.525 }, 00:25:42.525 "vs": { 00:25:42.525 "nvme_version": "1.3" 00:25:42.525 }, 00:25:42.525 "ns_data": { 00:25:42.525 "id": 1, 00:25:42.525 "can_share": true 00:25:42.525 } 00:25:42.525 } 00:25:42.525 ], 00:25:42.525 "mp_policy": "active_passive" 00:25:42.525 } 00:25:42.525 } 00:25:42.525 ] 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
-- # mktemp 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.T82wFsXPqy 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.T82wFsXPqy 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.525 [2024-06-07 21:42:42.703536] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:42.525 [2024-06-07 21:42:42.703674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T82wFsXPqy 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.525 [2024-06-07 21:42:42.711547] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T82wFsXPqy 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.525 [2024-06-07 21:42:42.719574] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:42.525 [2024-06-07 21:42:42.719620] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:42.525 nvme0n1 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:42.525 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.785 [ 00:25:42.785 { 00:25:42.785 "name": "nvme0n1", 00:25:42.785 "aliases": [ 00:25:42.785 "03f41144-45ff-4c55-8e48-43fb8ba41cfa" 00:25:42.785 ], 00:25:42.785 
"product_name": "NVMe disk", 00:25:42.785 "block_size": 512, 00:25:42.785 "num_blocks": 2097152, 00:25:42.785 "uuid": "03f41144-45ff-4c55-8e48-43fb8ba41cfa", 00:25:42.785 "assigned_rate_limits": { 00:25:42.785 "rw_ios_per_sec": 0, 00:25:42.785 "rw_mbytes_per_sec": 0, 00:25:42.785 "r_mbytes_per_sec": 0, 00:25:42.785 "w_mbytes_per_sec": 0 00:25:42.785 }, 00:25:42.785 "claimed": false, 00:25:42.785 "zoned": false, 00:25:42.785 "supported_io_types": { 00:25:42.785 "read": true, 00:25:42.785 "write": true, 00:25:42.785 "unmap": false, 00:25:42.785 "write_zeroes": true, 00:25:42.785 "flush": true, 00:25:42.785 "reset": true, 00:25:42.785 "compare": true, 00:25:42.785 "compare_and_write": true, 00:25:42.785 "abort": true, 00:25:42.785 "nvme_admin": true, 00:25:42.785 "nvme_io": true 00:25:42.785 }, 00:25:42.785 "memory_domains": [ 00:25:42.785 { 00:25:42.785 "dma_device_id": "system", 00:25:42.785 "dma_device_type": 1 00:25:42.785 } 00:25:42.785 ], 00:25:42.785 "driver_specific": { 00:25:42.785 "nvme": [ 00:25:42.785 { 00:25:42.785 "trid": { 00:25:42.785 "trtype": "TCP", 00:25:42.785 "adrfam": "IPv4", 00:25:42.785 "traddr": "10.0.0.2", 00:25:42.785 "trsvcid": "4421", 00:25:42.785 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:42.785 }, 00:25:42.785 "ctrlr_data": { 00:25:42.785 "cntlid": 3, 00:25:42.785 "vendor_id": "0x8086", 00:25:42.785 "model_number": "SPDK bdev Controller", 00:25:42.785 "serial_number": "00000000000000000000", 00:25:42.785 "firmware_revision": "24.09", 00:25:42.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:42.785 "oacs": { 00:25:42.785 "security": 0, 00:25:42.785 "format": 0, 00:25:42.785 "firmware": 0, 00:25:42.785 "ns_manage": 0 00:25:42.785 }, 00:25:42.785 "multi_ctrlr": true, 00:25:42.785 "ana_reporting": false 00:25:42.785 }, 00:25:42.785 "vs": { 00:25:42.785 "nvme_version": "1.3" 00:25:42.785 }, 00:25:42.785 "ns_data": { 00:25:42.785 "id": 1, 00:25:42.785 "can_share": true 00:25:42.785 } 00:25:42.785 } 00:25:42.785 ], 00:25:42.785 "mp_policy": "active_passive" 00:25:42.785 } 00:25:42.785 } 00:25:42.785 ] 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.T82wFsXPqy 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.785 rmmod nvme_tcp 00:25:42.785 rmmod nvme_fabrics 00:25:42.785 rmmod nvme_keyring 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1543826 ']' 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1543826 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 1543826 ']' 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 1543826 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1543826 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1543826' 00:25:42.785 killing process with pid 1543826 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 1543826 00:25:42.785 [2024-06-07 21:42:42.944485] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:42.785 [2024-06-07 21:42:42.944516] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:42.785 21:42:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 1543826 00:25:43.042 21:42:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:43.042 21:42:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:43.042 21:42:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:43.042 21:42:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.042 21:42:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.042 21:42:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.042 21:42:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.042 21:42:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.946 21:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.946 00:25:44.946 real 0m10.606s 00:25:44.946 user 0m3.964s 00:25:44.946 sys 0m5.286s 00:25:44.946 21:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:44.946 21:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:44.946 ************************************ 00:25:44.946 END TEST nvmf_async_init 00:25:44.946 ************************************ 00:25:45.205 21:42:45 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:45.205 21:42:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:45.205 21:42:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:45.205 21:42:45 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.205 ************************************ 00:25:45.205 START TEST dma 00:25:45.205 ************************************ 00:25:45.205 21:42:45 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:45.205 * Looking for test storage... 00:25:45.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:45.205 21:42:45 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.205 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:25:45.205 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.205 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.205 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.205 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.205 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.206 21:42:45 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.206 21:42:45 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.206 21:42:45 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.206 21:42:45 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.206 21:42:45 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.206 21:42:45 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.206 21:42:45 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:25:45.206 21:42:45 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.206 21:42:45 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.206 21:42:45 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:45.206 21:42:45 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:25:45.206 00:25:45.206 real 0m0.115s 00:25:45.206 user 0m0.044s 00:25:45.206 sys 0m0.079s 00:25:45.206 21:42:45 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:45.206 21:42:45 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:25:45.206 ************************************ 00:25:45.206 END TEST dma 00:25:45.206 ************************************ 00:25:45.206 21:42:45 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:45.206 21:42:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:45.206 21:42:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:45.206 21:42:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.206 ************************************ 00:25:45.206 START TEST 
nvmf_identify 00:25:45.206 ************************************ 00:25:45.206 21:42:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:45.466 * Looking for test storage... 00:25:45.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:45.466 21:42:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.033 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:52.034 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:52.034 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:52.034 Found net devices under 0000:af:00.0: cvl_0_0 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:52.034 Found net devices under 0000:af:00.1: cvl_0_1 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:52.034 21:42:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:52.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:25:52.034 00:25:52.034 --- 10.0.0.2 ping statistics --- 00:25:52.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.034 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:25:52.034 00:25:52.034 --- 10.0.0.1 ping statistics --- 00:25:52.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.034 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1548156 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1548156 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 1548156 ']' 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:52.034 21:42:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:52.034 [2024-06-07 21:42:52.167681] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:25:52.034 [2024-06-07 21:42:52.167737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.034 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.034 [2024-06-07 21:42:52.265388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:52.292 [2024-06-07 21:42:52.360077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:52.292 [2024-06-07 21:42:52.360116] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.292 [2024-06-07 21:42:52.360126] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.292 [2024-06-07 21:42:52.360135] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.292 [2024-06-07 21:42:52.360143] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.292 [2024-06-07 21:42:52.360197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.292 [2024-06-07 21:42:52.360306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.292 [2024-06-07 21:42:52.360422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:52.292 [2024-06-07 21:42:52.360422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.861 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:52.861 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:25:52.861 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.861 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.861 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:52.861 [2024-06-07 21:42:53.125551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:53.145 Malloc0 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 
-- # xtrace_disable 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:53.145 [2024-06-07 21:42:53.217680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:53.145 [ 00:25:53.145 { 00:25:53.145 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:53.145 "subtype": "Discovery", 00:25:53.145 "listen_addresses": [ 00:25:53.145 { 00:25:53.145 "trtype": "TCP", 00:25:53.145 "adrfam": "IPv4", 00:25:53.145 "traddr": "10.0.0.2", 00:25:53.145 "trsvcid": "4420" 00:25:53.145 } 00:25:53.145 ], 00:25:53.145 "allow_any_host": true, 00:25:53.145 "hosts": [] 00:25:53.145 }, 00:25:53.145 { 00:25:53.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.145 "subtype": "NVMe", 00:25:53.145 "listen_addresses": [ 00:25:53.145 { 00:25:53.145 "trtype": "TCP", 00:25:53.145 "adrfam": "IPv4", 00:25:53.145 "traddr": "10.0.0.2", 00:25:53.145 "trsvcid": "4420" 00:25:53.145 } 00:25:53.145 ], 00:25:53.145 "allow_any_host": true, 00:25:53.145 "hosts": [], 00:25:53.145 "serial_number": "SPDK00000000000001", 00:25:53.145 "model_number": "SPDK bdev Controller", 00:25:53.145 "max_namespaces": 32, 00:25:53.145 "min_cntlid": 1, 00:25:53.145 "max_cntlid": 65519, 00:25:53.145 "namespaces": [ 00:25:53.145 { 00:25:53.145 "nsid": 1, 00:25:53.145 "bdev_name": "Malloc0", 00:25:53.145 "name": "Malloc0", 00:25:53.145 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:53.145 "eui64": "ABCDEF0123456789", 00:25:53.145 "uuid": "57ba4638-e361-4e30-921c-dd75707b2d66" 00:25:53.145 } 00:25:53.145 ] 00:25:53.145 } 00:25:53.145 ] 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.145 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:53.145 [2024-06-07 21:42:53.268085] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:25:53.145 [2024-06-07 21:42:53.268123] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548440 ] 00:25:53.145 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.145 [2024-06-07 21:42:53.305599] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:53.145 [2024-06-07 21:42:53.305653] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:53.145 [2024-06-07 21:42:53.305661] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:53.145 [2024-06-07 21:42:53.305676] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:53.145 [2024-06-07 21:42:53.305687] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:53.145 [2024-06-07 21:42:53.306140] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:53.145 [2024-06-07 21:42:53.306177] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fc6ec0 0 00:25:53.145 [2024-06-07 21:42:53.321038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:53.145 [2024-06-07 21:42:53.321054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:53.145 [2024-06-07 21:42:53.321060] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:53.145 [2024-06-07 21:42:53.321064] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:53.145 [2024-06-07 21:42:53.321109] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.145 [2024-06-07 21:42:53.321116] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.321122] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.146 [2024-06-07 21:42:53.321138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:53.146 [2024-06-07 21:42:53.321158] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.146 [2024-06-07 21:42:53.329038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.146 [2024-06-07 21:42:53.329049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.146 [2024-06-07 21:42:53.329054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329060] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.146 [2024-06-07 21:42:53.329073] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:53.146 [2024-06-07 21:42:53.329082] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:53.146 [2024-06-07 21:42:53.329088] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:53.146 [2024-06-07 21:42:53.329105] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329110] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.146 [2024-06-07 21:42:53.329124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.146 [2024-06-07 21:42:53.329144] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.146 [2024-06-07 21:42:53.329376] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.146 [2024-06-07 21:42:53.329385] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.146 [2024-06-07 21:42:53.329390] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329395] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.146 [2024-06-07 21:42:53.329403] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:53.146 [2024-06-07 21:42:53.329413] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:53.146 [2024-06-07 21:42:53.329422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329427] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329431] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.146 [2024-06-07 21:42:53.329440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.146 [2024-06-07 21:42:53.329454] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.146 [2024-06-07 21:42:53.329577] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.146 [2024-06-07 21:42:53.329585] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.146 [2024-06-07 21:42:53.329589] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329594] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.146 [2024-06-07 21:42:53.329602] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:53.146 [2024-06-07 21:42:53.329613] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:53.146 [2024-06-07 21:42:53.329622] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329627] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.146 [2024-06-07 21:42:53.329640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.146 [2024-06-07 21:42:53.329653] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.146 [2024-06-07 21:42:53.329760] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:53.146 [2024-06-07 21:42:53.329768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.146 [2024-06-07 21:42:53.329772] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329777] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.146 [2024-06-07 21:42:53.329785] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:53.146 [2024-06-07 21:42:53.329797] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329803] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329807] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.146 [2024-06-07 21:42:53.329815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.146 [2024-06-07 21:42:53.329828] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.146 [2024-06-07 21:42:53.329946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.146 [2024-06-07 21:42:53.329954] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.146 [2024-06-07 21:42:53.329961] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.329966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.146 [2024-06-07 21:42:53.329973] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:53.146 [2024-06-07 21:42:53.329979] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:53.146 [2024-06-07 21:42:53.329989] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:53.146 [2024-06-07 21:42:53.330097] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:53.146 [2024-06-07 21:42:53.330104] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:53.146 [2024-06-07 21:42:53.330114] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330119] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330123] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.146 [2024-06-07 21:42:53.330132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.146 [2024-06-07 21:42:53.330146] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.146 [2024-06-07 21:42:53.330260] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.146 [2024-06-07 21:42:53.330268] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.146 [2024-06-07 21:42:53.330273] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330277] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.146 [2024-06-07 21:42:53.330285] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:53.146 [2024-06-07 21:42:53.330297] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330302] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330307] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.146 [2024-06-07 21:42:53.330315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.146 [2024-06-07 21:42:53.330328] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.146 [2024-06-07 21:42:53.330439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.146 [2024-06-07 21:42:53.330448] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.146 [2024-06-07 21:42:53.330452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.146 [2024-06-07 21:42:53.330463] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:53.146 [2024-06-07 21:42:53.330469] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:53.146 [2024-06-07 21:42:53.330479] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:53.146 [2024-06-07 21:42:53.330489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:53.146 [2024-06-07 21:42:53.330501] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330508] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.146 [2024-06-07 21:42:53.330517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.146 [2024-06-07 21:42:53.330531] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.146 [2024-06-07 21:42:53.330672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.146 [2024-06-07 21:42:53.330680] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.146 [2024-06-07 21:42:53.330685] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330690] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fc6ec0): datao=0, datal=4096, cccid=0 00:25:53.146 [2024-06-07 21:42:53.330696] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2049df0) on tqpair(0x1fc6ec0): expected_datao=0, payload_size=4096 00:25:53.146 [2024-06-07 
21:42:53.330702] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330773] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.330779] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.371229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.146 [2024-06-07 21:42:53.371246] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.146 [2024-06-07 21:42:53.371251] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.146 [2024-06-07 21:42:53.371257] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.146 [2024-06-07 21:42:53.371269] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:53.146 [2024-06-07 21:42:53.371276] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:53.146 [2024-06-07 21:42:53.371282] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:53.146 [2024-06-07 21:42:53.371293] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:53.147 [2024-06-07 21:42:53.371299] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:53.147 [2024-06-07 21:42:53.371306] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:53.147 [2024-06-07 21:42:53.371318] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:53.147 [2024-06-07 21:42:53.371328] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371333] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371338] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.371348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:53.147 [2024-06-07 21:42:53.371365] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.147 [2024-06-07 21:42:53.371497] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.147 [2024-06-07 21:42:53.371505] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.147 [2024-06-07 21:42:53.371510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371515] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2049df0) on tqpair=0x1fc6ec0 00:25:53.147 [2024-06-07 21:42:53.371525] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371534] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.371546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.147 [2024-06-07 21:42:53.371554] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371559] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.371570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.147 [2024-06-07 21:42:53.371578] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371583] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371587] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.371594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.147 [2024-06-07 21:42:53.371602] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371606] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.371618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.147 [2024-06-07 21:42:53.371624] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:53.147 [2024-06-07 21:42:53.371638] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:53.147 [2024-06-07 21:42:53.371647] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.371659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.147 [2024-06-07 21:42:53.371675] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049df0, cid 0, qid 0 00:25:53.147 [2024-06-07 21:42:53.371682] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2049f50, cid 1, qid 0 00:25:53.147 [2024-06-07 21:42:53.371688] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a0b0, cid 2, qid 0 00:25:53.147 [2024-06-07 21:42:53.371694] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.147 [2024-06-07 21:42:53.371700] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a370, cid 4, qid 0 00:25:53.147 [2024-06-07 21:42:53.371862] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.147 [2024-06-07 21:42:53.371871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.147 [2024-06-07 21:42:53.371875] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371880] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a370) on 
tqpair=0x1fc6ec0 00:25:53.147 [2024-06-07 21:42:53.371887] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:53.147 [2024-06-07 21:42:53.371893] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:53.147 [2024-06-07 21:42:53.371908] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.371913] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.371922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.147 [2024-06-07 21:42:53.371942] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a370, cid 4, qid 0 00:25:53.147 [2024-06-07 21:42:53.372125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.147 [2024-06-07 21:42:53.372147] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.147 [2024-06-07 21:42:53.372152] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372157] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fc6ec0): datao=0, datal=4096, cccid=4 00:25:53.147 [2024-06-07 21:42:53.372162] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x204a370) on tqpair(0x1fc6ec0): expected_datao=0, payload_size=4096 00:25:53.147 [2024-06-07 21:42:53.372168] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372218] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372224] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.147 [2024-06-07 21:42:53.372308] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.147 [2024-06-07 21:42:53.372312] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372317] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a370) on tqpair=0x1fc6ec0 00:25:53.147 [2024-06-07 21:42:53.372333] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:53.147 [2024-06-07 21:42:53.372361] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.372376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.147 [2024-06-07 21:42:53.372384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372389] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372393] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fc6ec0) 00:25:53.147 [2024-06-07 21:42:53.372401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.147 [2024-06-07 21:42:53.372420] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a370, cid 4, qid 0 00:25:53.147 [2024-06-07 21:42:53.372427] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a4d0, cid 5, qid 0 00:25:53.147 [2024-06-07 21:42:53.372633] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.147 [2024-06-07 21:42:53.372642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.147 [2024-06-07 21:42:53.372646] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372651] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fc6ec0): datao=0, datal=1024, cccid=4 00:25:53.147 [2024-06-07 21:42:53.372657] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x204a370) on tqpair(0x1fc6ec0): expected_datao=0, payload_size=1024 00:25:53.147 [2024-06-07 21:42:53.372662] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372670] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372675] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372682] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.147 [2024-06-07 21:42:53.372689] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.147 [2024-06-07 21:42:53.372693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.147 [2024-06-07 21:42:53.372698] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a4d0) on tqpair=0x1fc6ec0 00:25:53.409 [2024-06-07 21:42:53.413223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.409 [2024-06-07 21:42:53.413243] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.409 [2024-06-07 21:42:53.413249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.413254] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a370) on tqpair=0x1fc6ec0 00:25:53.409 [2024-06-07 21:42:53.413276] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.413282] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fc6ec0) 00:25:53.409 [2024-06-07 21:42:53.413292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.409 [2024-06-07 21:42:53.413314] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a370, cid 4, qid 0 00:25:53.409 [2024-06-07 21:42:53.413440] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.409 [2024-06-07 21:42:53.413449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.409 [2024-06-07 21:42:53.413453] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.413458] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fc6ec0): datao=0, datal=3072, cccid=4 00:25:53.409 [2024-06-07 21:42:53.413464] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x204a370) on tqpair(0x1fc6ec0): expected_datao=0, payload_size=3072 00:25:53.409 [2024-06-07 21:42:53.413470] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.413552] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.413558] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.454184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.409 [2024-06-07 21:42:53.454200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.409 [2024-06-07 21:42:53.454204] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.454209] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a370) on tqpair=0x1fc6ec0 00:25:53.409 [2024-06-07 21:42:53.454224] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.454229] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fc6ec0) 00:25:53.409 [2024-06-07 21:42:53.454239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.409 [2024-06-07 21:42:53.454260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a370, cid 4, qid 0 00:25:53.409 [2024-06-07 21:42:53.454432] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.409 [2024-06-07 21:42:53.454440] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.409 [2024-06-07 21:42:53.454445] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.454450] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fc6ec0): datao=0, datal=8, cccid=4 00:25:53.409 [2024-06-07 21:42:53.454455] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x204a370) on tqpair(0x1fc6ec0): expected_datao=0, payload_size=8 00:25:53.409 [2024-06-07 21:42:53.454461] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.454469] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.454474] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.499038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.409 [2024-06-07 21:42:53.499051] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.409 [2024-06-07 21:42:53.499056] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.409 [2024-06-07 21:42:53.499061] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a370) on tqpair=0x1fc6ec0 00:25:53.409 ===================================================== 00:25:53.409 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:53.409 ===================================================== 00:25:53.409 Controller Capabilities/Features 00:25:53.409 ================================ 00:25:53.409 Vendor ID: 0000 00:25:53.409 Subsystem Vendor ID: 0000 00:25:53.409 Serial Number: .................... 00:25:53.409 Model Number: ........................................ 
00:25:53.409 Firmware Version: 24.09 00:25:53.409 Recommended Arb Burst: 0 00:25:53.409 IEEE OUI Identifier: 00 00 00 00:25:53.409 Multi-path I/O 00:25:53.409 May have multiple subsystem ports: No 00:25:53.409 May have multiple controllers: No 00:25:53.409 Associated with SR-IOV VF: No 00:25:53.409 Max Data Transfer Size: 131072 00:25:53.409 Max Number of Namespaces: 0 00:25:53.409 Max Number of I/O Queues: 1024 00:25:53.409 NVMe Specification Version (VS): 1.3 00:25:53.409 NVMe Specification Version (Identify): 1.3 00:25:53.409 Maximum Queue Entries: 128 00:25:53.409 Contiguous Queues Required: Yes 00:25:53.409 Arbitration Mechanisms Supported 00:25:53.409 Weighted Round Robin: Not Supported 00:25:53.409 Vendor Specific: Not Supported 00:25:53.409 Reset Timeout: 15000 ms 00:25:53.409 Doorbell Stride: 4 bytes 00:25:53.409 NVM Subsystem Reset: Not Supported 00:25:53.409 Command Sets Supported 00:25:53.409 NVM Command Set: Supported 00:25:53.409 Boot Partition: Not Supported 00:25:53.409 Memory Page Size Minimum: 4096 bytes 00:25:53.409 Memory Page Size Maximum: 4096 bytes 00:25:53.409 Persistent Memory Region: Not Supported 00:25:53.409 Optional Asynchronous Events Supported 00:25:53.409 Namespace Attribute Notices: Not Supported 00:25:53.409 Firmware Activation Notices: Not Supported 00:25:53.409 ANA Change Notices: Not Supported 00:25:53.409 PLE Aggregate Log Change Notices: Not Supported 00:25:53.409 LBA Status Info Alert Notices: Not Supported 00:25:53.409 EGE Aggregate Log Change Notices: Not Supported 00:25:53.409 Normal NVM Subsystem Shutdown event: Not Supported 00:25:53.409 Zone Descriptor Change Notices: Not Supported 00:25:53.409 Discovery Log Change Notices: Supported 00:25:53.409 Controller Attributes 00:25:53.409 128-bit Host Identifier: Not Supported 00:25:53.409 Non-Operational Permissive Mode: Not Supported 00:25:53.409 NVM Sets: Not Supported 00:25:53.409 Read Recovery Levels: Not Supported 00:25:53.409 Endurance Groups: Not Supported 00:25:53.409 Predictable Latency Mode: Not Supported 00:25:53.409 Traffic Based Keep Alive: Not Supported 00:25:53.409 Namespace Granularity: Not Supported 00:25:53.409 SQ Associations: Not Supported 00:25:53.410 UUID List: Not Supported 00:25:53.410 Multi-Domain Subsystem: Not Supported 00:25:53.410 Fixed Capacity Management: Not Supported 00:25:53.410 Variable Capacity Management: Not Supported 00:25:53.410 Delete Endurance Group: Not Supported 00:25:53.410 Delete NVM Set: Not Supported 00:25:53.410 Extended LBA Formats Supported: Not Supported 00:25:53.410 Flexible Data Placement Supported: Not Supported 00:25:53.410
00:25:53.410 Controller Memory Buffer Support 00:25:53.410 ================================ 00:25:53.410 Supported: No 00:25:53.410
00:25:53.410 Persistent Memory Region Support 00:25:53.410 ================================ 00:25:53.410 Supported: No 00:25:53.410
00:25:53.410 Admin Command Set Attributes 00:25:53.410 ============================ 00:25:53.410 Security Send/Receive: Not Supported 00:25:53.410 Format NVM: Not Supported 00:25:53.410 Firmware Activate/Download: Not Supported 00:25:53.410 Namespace Management: Not Supported 00:25:53.410 Device Self-Test: Not Supported 00:25:53.410 Directives: Not Supported 00:25:53.410 NVMe-MI: Not Supported 00:25:53.410 Virtualization Management: Not Supported 00:25:53.410 Doorbell Buffer Config: Not Supported 00:25:53.410 Get LBA Status Capability: Not Supported 00:25:53.410 Command & Feature Lockdown Capability: Not Supported 00:25:53.410 Abort Command Limit: 1 00:25:53.410 Async Event Request Limit: 4 00:25:53.410 Number of Firmware Slots: N/A 00:25:53.410 Firmware Slot 1 Read-Only: N/A 00:25:53.410 Firmware Activation Without Reset: N/A 00:25:53.410 Multiple Update Detection Support: N/A 00:25:53.410 Firmware Update Granularity: No Information Provided 00:25:53.410 Per-Namespace SMART Log: No 00:25:53.410 Asymmetric Namespace Access Log Page: Not Supported 00:25:53.410 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:53.410 Command Effects Log Page: Not Supported 00:25:53.410 Get Log Page Extended Data: Supported 00:25:53.410 Telemetry Log Pages: Not Supported 00:25:53.410 Persistent Event Log Pages: Not Supported 00:25:53.410 Supported Log Pages Log Page: May Support 00:25:53.410 Commands Supported & Effects Log Page: Not Supported 00:25:53.410 Feature Identifiers & Effects Log Page: May Support 00:25:53.410 NVMe-MI Commands & Effects Log Page: May Support 00:25:53.410 Data Area 4 for Telemetry Log: Not Supported 00:25:53.410 Error Log Page Entries Supported: 128 00:25:53.410 Keep Alive: Not Supported 00:25:53.410
00:25:53.410 NVM Command Set Attributes 00:25:53.410 ========================== 00:25:53.410 Submission Queue Entry Size 00:25:53.410 Max: 1 00:25:53.410 Min: 1 00:25:53.410 Completion Queue Entry Size 00:25:53.410 Max: 1 00:25:53.410 Min: 1 00:25:53.410 Number of Namespaces: 0 00:25:53.410 Compare Command: Not Supported 00:25:53.410 Write Uncorrectable Command: Not Supported 00:25:53.410 Dataset Management Command: Not Supported 00:25:53.410 Write Zeroes Command: Not Supported 00:25:53.410 Set Features Save Field: Not Supported 00:25:53.410 Reservations: Not Supported 00:25:53.410 Timestamp: Not Supported 00:25:53.410 Copy: Not Supported 00:25:53.410 Volatile Write Cache: Not Present 00:25:53.410 Atomic Write Unit (Normal): 1 00:25:53.410 Atomic Write Unit (PFail): 1 00:25:53.410 Atomic Compare & Write Unit: 1 00:25:53.410 Fused Compare & Write: Supported 00:25:53.410 Scatter-Gather List 00:25:53.410 SGL Command Set: Supported 00:25:53.410 SGL Keyed: Supported 00:25:53.410 SGL Bit Bucket Descriptor: Not Supported 00:25:53.410 SGL Metadata Pointer: Not Supported 00:25:53.410 Oversized SGL: Not Supported 00:25:53.410 SGL Metadata Address: Not Supported 00:25:53.410 SGL Offset: Supported 00:25:53.410 Transport SGL Data Block: Not Supported 00:25:53.410 Replay Protected Memory Block: Not Supported 00:25:53.410
00:25:53.410 Firmware Slot Information 00:25:53.410 ========================= 00:25:53.410 Active slot: 0 00:25:53.410 00:25:53.410
00:25:53.410 Error Log 00:25:53.410 ========= 00:25:53.410
00:25:53.410 Active Namespaces 00:25:53.410 ================= 00:25:53.410 Discovery Log Page 00:25:53.410 ================== 00:25:53.410 Generation Counter: 2 00:25:53.410 Number of Records: 2 00:25:53.410 Record Format: 0 00:25:53.410
00:25:53.410 Discovery Log Entry 0 00:25:53.410 ---------------------- 00:25:53.410 Transport Type: 3 (TCP) 00:25:53.410 Address Family: 1 (IPv4) 00:25:53.410 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:53.410 Entry Flags: 00:25:53.410 Duplicate Returned Information: 1 00:25:53.410 Explicit Persistent Connection Support for Discovery: 1 00:25:53.410 Transport Requirements: 00:25:53.410 Secure Channel: Not Required 00:25:53.410 Port ID: 0 (0x0000) 00:25:53.410 Controller ID: 65535 (0xffff) 00:25:53.410 Admin Max SQ Size: 128 00:25:53.410 Transport Service Identifier: 4420 00:25:53.410 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:53.410 Transport Address: 10.0.0.2 00:25:53.410 
Discovery Log Entry 1 00:25:53.410 ---------------------- 00:25:53.410 Transport Type: 3 (TCP) 00:25:53.410 Address Family: 1 (IPv4) 00:25:53.410 Subsystem Type: 2 (NVM Subsystem) 00:25:53.410 Entry Flags: 00:25:53.410 Duplicate Returned Information: 0 00:25:53.410 Explicit Persistent Connection Support for Discovery: 0 00:25:53.410 Transport Requirements: 00:25:53.410 Secure Channel: Not Required 00:25:53.410 Port ID: 0 (0x0000) 00:25:53.410 Controller ID: 65535 (0xffff) 00:25:53.410 Admin Max SQ Size: 128 00:25:53.410 Transport Service Identifier: 4420 00:25:53.410 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:53.410 Transport Address: 10.0.0.2 [2024-06-07 21:42:53.499165] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:53.410 [2024-06-07 21:42:53.499185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.410 [2024-06-07 21:42:53.499194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.410 [2024-06-07 21:42:53.499202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.410 [2024-06-07 21:42:53.499210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.410 [2024-06-07 21:42:53.499220] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499225] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499230] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.410 [2024-06-07 21:42:53.499239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.410 [2024-06-07 21:42:53.499256] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.410 [2024-06-07 21:42:53.499436] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.410 [2024-06-07 21:42:53.499444] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.410 [2024-06-07 21:42:53.499448] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499453] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.410 [2024-06-07 21:42:53.499466] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499476] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.410 [2024-06-07 21:42:53.499485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.410 [2024-06-07 21:42:53.499503] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.410 [2024-06-07 21:42:53.499657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.410 [2024-06-07 21:42:53.499666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.410 [2024-06-07 21:42:53.499670] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499675] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.410 [2024-06-07 21:42:53.499682] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:53.410 [2024-06-07 21:42:53.499688] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:53.410 [2024-06-07 21:42:53.499700] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499706] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499710] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.410 [2024-06-07 21:42:53.499718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.410 [2024-06-07 21:42:53.499732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.410 [2024-06-07 21:42:53.499849] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.410 [2024-06-07 21:42:53.499857] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.410 [2024-06-07 21:42:53.499862] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.410 [2024-06-07 21:42:53.499866] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.410 [2024-06-07 21:42:53.499880] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.499886] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.499896] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.499904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.499918] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.500039] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.500048] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.500052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500057] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.500071] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500081] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.500089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.500102] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.500269] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 
21:42:53.500277] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.500281] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500286] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.500299] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500304] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500309] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.500317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.500331] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.500441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.500449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.500454] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500458] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.500471] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.500489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.500502] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.500664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.500672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.500676] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500681] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.500694] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500699] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500704] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.500715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.500728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.500838] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.500847] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.500851] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
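
The two discovery log entries dumped above follow the fixed 1024-byte entry layout from the NVMe over Fabrics specification; SPDK carries an equivalent definition (spdk_nvmf_discovery_log_page_entry in include/spdk/nvmf_spec.h). A minimal sketch of that layout for orientation, with comments decoding the values printed above (field names follow the spec, not necessarily SPDK's header verbatim):

```c
#include <stdint.h>

/* NVMe-oF discovery log page entry, 1024 bytes per the NVMe-oF spec.
 * Comments decode the two entries printed in this log. */
struct discovery_log_entry {
	uint8_t  trtype;          /* 3 = TCP                                   */
	uint8_t  adrfam;          /* 1 = IPv4                                  */
	uint8_t  subtype;         /* 3 = current discovery subsystem (entry 0),
	                           * 2 = NVM subsystem (entry 1)               */
	uint8_t  treq;            /* transport requirements; secure channel
	                           * "Not Required" in both entries            */
	uint16_t portid;          /* 0 (0x0000)                                */
	uint16_t cntlid;          /* 65535 (0xffff) = dynamic controller model */
	uint16_t asqsz;           /* admin max SQ size: 128                    */
	uint16_t eflags;          /* entry flags: duplicate returned info,
	                           * explicit persistent connection support    */
	uint8_t  reserved12[20];
	char     trsvcid[32];     /* "4420"                                    */
	uint8_t  reserved64[192];
	char     subnqn[256];     /* nqn.2014-08.org.nvmexpress.discovery or
	                           * nqn.2016-06.io.spdk:cnode1                */
	char     traddr[256];     /* "10.0.0.2"                                */
	uint8_t  tsas[256];       /* transport-specific address subtype        */
};
```
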
00:25:53.411 [2024-06-07 21:42:53.500856] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.500868] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500873] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.500877] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.500886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.500898] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.501021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.501035] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.501039] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501044] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.501058] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501063] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501067] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.501076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.501089] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.501223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.501232] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.501236] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501240] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.501254] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501259] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.501271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.501284] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.501468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.501476] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.501480] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.501498] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501504] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501508] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.501516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.501532] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.501646] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.501654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.501658] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.501676] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501686] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.501694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.501707] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.501883] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.501891] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.501895] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501900] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.501913] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501918] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.501923] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.501931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.501944] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.502092] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.502101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.502105] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.502110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.411 [2024-06-07 21:42:53.502123] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.502128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.411 [2024-06-07 
21:42:53.502133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.411 [2024-06-07 21:42:53.502141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.411 [2024-06-07 21:42:53.502154] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.411 [2024-06-07 21:42:53.502292] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.411 [2024-06-07 21:42:53.502300] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.411 [2024-06-07 21:42:53.502305] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.411 [2024-06-07 21:42:53.502309] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.412 [2024-06-07 21:42:53.502322] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502328] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502332] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.412 [2024-06-07 21:42:53.502340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.412 [2024-06-07 21:42:53.502355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.412 [2024-06-07 21:42:53.502475] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.412 [2024-06-07 21:42:53.502484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.412 [2024-06-07 21:42:53.502488] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502493] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.412 [2024-06-07 21:42:53.502505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502510] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502514] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.412 [2024-06-07 21:42:53.502523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.412 [2024-06-07 21:42:53.502536] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.412 [2024-06-07 21:42:53.502645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.412 [2024-06-07 21:42:53.502653] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.412 [2024-06-07 21:42:53.502657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502662] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.412 [2024-06-07 21:42:53.502675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.412 [2024-06-07 21:42:53.502693] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.412 [2024-06-07 21:42:53.502706] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.412 [2024-06-07 21:42:53.502847] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.412 [2024-06-07 21:42:53.502855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.412 [2024-06-07 21:42:53.502859] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502864] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.412 [2024-06-07 21:42:53.502877] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.502887] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.412 [2024-06-07 21:42:53.502895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.412 [2024-06-07 21:42:53.502908] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.412 [2024-06-07 21:42:53.507035] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.412 [2024-06-07 21:42:53.507048] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.412 [2024-06-07 21:42:53.507052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.507057] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.412 [2024-06-07 21:42:53.507072] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.507078] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.507082] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fc6ec0) 00:25:53.412 [2024-06-07 21:42:53.507091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.412 [2024-06-07 21:42:53.507107] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x204a210, cid 3, qid 0 00:25:53.412 [2024-06-07 21:42:53.507312] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.412 [2024-06-07 21:42:53.507320] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.412 [2024-06-07 21:42:53.507325] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.507330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x204a210) on tqpair=0x1fc6ec0 00:25:53.412 [2024-06-07 21:42:53.507341] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:25:53.412 00:25:53.412 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:53.412 [2024-06-07 21:42:53.545870] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
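
The spdk_nvme_identify pass that starts here targets nqn.2016-06.io.spdk:cnode1 directly rather than the discovery subsystem. A minimal sketch of the same connect-and-identify flow through SPDK's public host API (assumptions: the app name identify_sketch is arbitrary, error handling is trimmed, and the printed fields are just the ctrlr_data counterparts of the "Firmware Version" and "Max Number of Namespaces" lines dumped above):

```c
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";          /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same -r string the test passes to spdk_nvme_identify. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect with default controller options; this drives
	 * the whole admin-queue bring-up traced below. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);    /* cached IDENTIFY data  */
	printf("Firmware Version: %.8s\n", cdata->fr);
	printf("Max Number of Namespaces: %u\n", cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
```
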
00:25:53.412 [2024-06-07 21:42:53.545919] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548448 ] 00:25:53.412 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.412 [2024-06-07 21:42:53.583267] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:53.412 [2024-06-07 21:42:53.583319] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:53.412 [2024-06-07 21:42:53.583326] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:53.412 [2024-06-07 21:42:53.583339] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:53.412 [2024-06-07 21:42:53.583350] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:53.412 [2024-06-07 21:42:53.583747] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:53.412 [2024-06-07 21:42:53.583781] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ad9ec0 0 00:25:53.412 [2024-06-07 21:42:53.590036] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:53.412 [2024-06-07 21:42:53.590052] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:53.412 [2024-06-07 21:42:53.590058] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:53.412 [2024-06-07 21:42:53.590062] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:53.412 [2024-06-07 21:42:53.590100] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.590107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.590112] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.412 [2024-06-07 21:42:53.590126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:53.412 [2024-06-07 21:42:53.590147] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.412 [2024-06-07 21:42:53.598036] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.412 [2024-06-07 21:42:53.598047] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.412 [2024-06-07 21:42:53.598052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.598057] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.412 [2024-06-07 21:42:53.598070] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:53.412 [2024-06-07 21:42:53.598077] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:53.412 [2024-06-07 21:42:53.598088] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:53.412 [2024-06-07 21:42:53.598103] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.598108] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
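
The controller bring-up that begins here ("setting state to connect adminq", then "read vs") walks SPDK's init state machine: read VS and CAP, check CC.EN, then enable the controller and wait for CSTS.RDY, all visible below as FABRIC PROPERTY GET/SET notices because a fabrics controller exposes its registers through Property Get/Set capsules instead of a PCI BAR. For orientation, a sketch of the register offsets being driven (offsets are from the NVMe base specification; the outline mirrors this trace, not SPDK's internal code):

```c
/* Controller register offsets (NVMe base spec); on NVMe-oF each access
 * travels as a Fabrics Property Get/Set capsule on the admin queue. */
#define NVME_REG_CAP   0x00  /* Controller Capabilities - "read cap" state    */
#define NVME_REG_VS    0x08  /* Version - "read vs" state (1.3 in this log)   */
#define NVME_REG_CC    0x14  /* Controller Configuration - CC.EN is bit 0     */
#define NVME_REG_CSTS  0x1c  /* Controller Status - CSTS.RDY is bit 0         */

/* Enable handshake as traced below, in outline:
 *   1. Property Get VS, CAP                 ("read vs", "read cap")
 *   2. Property Get CC                      ("check en")
 *   3. CC.EN == 0 && CSTS.RDY == 0          ("controller is disabled")
 *   4. Property Set CC.EN = 1               ("enable controller by writing CC.EN = 1")
 *   5. Poll CSTS.RDY until 1, 15 s timeout  ("wait for CSTS.RDY = 1")
 */
```
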
00:25:53.412 [2024-06-07 21:42:53.598113] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.412 [2024-06-07 21:42:53.598123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.412 [2024-06-07 21:42:53.598141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.412 [2024-06-07 21:42:53.598335] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.412 [2024-06-07 21:42:53.598344] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.412 [2024-06-07 21:42:53.598349] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.598354] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.412 [2024-06-07 21:42:53.598361] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:53.412 [2024-06-07 21:42:53.598372] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:53.412 [2024-06-07 21:42:53.598380] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.598386] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.598390] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.412 [2024-06-07 21:42:53.598399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.412 [2024-06-07 21:42:53.598415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.412 [2024-06-07 21:42:53.598516] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.412 [2024-06-07 21:42:53.598524] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.412 [2024-06-07 21:42:53.598528] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.598533] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.412 [2024-06-07 21:42:53.598541] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:53.412 [2024-06-07 21:42:53.598552] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:53.412 [2024-06-07 21:42:53.598561] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.598566] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.412 [2024-06-07 21:42:53.598571] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.412 [2024-06-07 21:42:53.598579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.412 [2024-06-07 21:42:53.598593] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.412 [2024-06-07 21:42:53.598689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.413 [2024-06-07 21:42:53.598698] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:25:53.413 [2024-06-07 21:42:53.598702] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.598707] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.413 [2024-06-07 21:42:53.598715] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:53.413 [2024-06-07 21:42:53.598727] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.598736] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.598741] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.413 [2024-06-07 21:42:53.598750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.413 [2024-06-07 21:42:53.598764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.413 [2024-06-07 21:42:53.598856] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.413 [2024-06-07 21:42:53.598865] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.413 [2024-06-07 21:42:53.598869] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.598874] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.413 [2024-06-07 21:42:53.598881] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:53.413 [2024-06-07 21:42:53.598887] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:53.413 [2024-06-07 21:42:53.598898] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:53.413 [2024-06-07 21:42:53.599005] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:53.413 [2024-06-07 21:42:53.599010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:53.413 [2024-06-07 21:42:53.599020] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599033] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599038] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.413 [2024-06-07 21:42:53.599047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.413 [2024-06-07 21:42:53.599061] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.413 [2024-06-07 21:42:53.599159] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.413 [2024-06-07 21:42:53.599167] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.413 [2024-06-07 21:42:53.599172] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599177] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.413 [2024-06-07 21:42:53.599184] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:53.413 [2024-06-07 21:42:53.599196] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599202] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.413 [2024-06-07 21:42:53.599215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.413 [2024-06-07 21:42:53.599229] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.413 [2024-06-07 21:42:53.599325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.413 [2024-06-07 21:42:53.599333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.413 [2024-06-07 21:42:53.599338] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599343] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.413 [2024-06-07 21:42:53.599349] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:53.413 [2024-06-07 21:42:53.599358] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:53.413 [2024-06-07 21:42:53.599369] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:53.413 [2024-06-07 21:42:53.599379] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:53.413 [2024-06-07 21:42:53.599390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599395] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.413 [2024-06-07 21:42:53.599403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.413 [2024-06-07 21:42:53.599418] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.413 [2024-06-07 21:42:53.599540] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.413 [2024-06-07 21:42:53.599549] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.413 [2024-06-07 21:42:53.599553] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599558] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ad9ec0): datao=0, datal=4096, cccid=0 00:25:53.413 [2024-06-07 21:42:53.599564] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5cdf0) on tqpair(0x1ad9ec0): expected_datao=0, payload_size=4096 00:25:53.413 [2024-06-07 21:42:53.599570] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599629] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.599635] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644035] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.413 [2024-06-07 21:42:53.644048] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.413 [2024-06-07 21:42:53.644052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644057] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.413 [2024-06-07 21:42:53.644068] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:53.413 [2024-06-07 21:42:53.644075] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:53.413 [2024-06-07 21:42:53.644081] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:53.413 [2024-06-07 21:42:53.644090] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:53.413 [2024-06-07 21:42:53.644096] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:53.413 [2024-06-07 21:42:53.644103] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:53.413 [2024-06-07 21:42:53.644114] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:53.413 [2024-06-07 21:42:53.644123] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.413 [2024-06-07 21:42:53.644142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:53.413 [2024-06-07 21:42:53.644160] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.413 [2024-06-07 21:42:53.644346] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.413 [2024-06-07 21:42:53.644355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.413 [2024-06-07 21:42:53.644362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644368] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5cdf0) on tqpair=0x1ad9ec0 00:25:53.413 [2024-06-07 21:42:53.644378] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644388] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ad9ec0) 00:25:53.413 [2024-06-07 21:42:53.644396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.413 [2024-06-07 21:42:53.644404] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644409] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644413] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ad9ec0) 00:25:53.413 [2024-06-07 21:42:53.644421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.413 [2024-06-07 21:42:53.644428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644433] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.413 [2024-06-07 21:42:53.644438] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ad9ec0) 00:25:53.414 [2024-06-07 21:42:53.644445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.414 [2024-06-07 21:42:53.644452] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.644457] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.644462] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.414 [2024-06-07 21:42:53.644469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.414 [2024-06-07 21:42:53.644475] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:53.414 [2024-06-07 21:42:53.644489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:53.414 [2024-06-07 21:42:53.644498] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.644503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ad9ec0) 00:25:53.414 [2024-06-07 21:42:53.644511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.414 [2024-06-07 21:42:53.644528] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cdf0, cid 0, qid 0 00:25:53.414 [2024-06-07 21:42:53.644535] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5cf50, cid 1, qid 0 00:25:53.414 [2024-06-07 21:42:53.644541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d0b0, cid 2, qid 0 00:25:53.414 [2024-06-07 21:42:53.644547] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.414 [2024-06-07 21:42:53.644553] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d370, cid 4, qid 0 00:25:53.414 [2024-06-07 21:42:53.644674] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.414 [2024-06-07 21:42:53.644682] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.414 [2024-06-07 21:42:53.644687] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.644692] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d370) on tqpair=0x1ad9ec0 00:25:53.414 [2024-06-07 21:42:53.644699] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:53.414 [2024-06-07 21:42:53.644705] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:25:53.414 [2024-06-07 21:42:53.644720] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:53.414 [2024-06-07 21:42:53.644728] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:53.414 [2024-06-07 21:42:53.644737] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.644742] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.644746] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ad9ec0) 00:25:53.414 [2024-06-07 21:42:53.644754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:53.414 [2024-06-07 21:42:53.644769] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d370, cid 4, qid 0 00:25:53.414 [2024-06-07 21:42:53.644919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.414 [2024-06-07 21:42:53.644927] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.414 [2024-06-07 21:42:53.644932] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.644937] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d370) on tqpair=0x1ad9ec0 00:25:53.414 [2024-06-07 21:42:53.645004] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:53.414 [2024-06-07 21:42:53.645018] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:53.414 [2024-06-07 21:42:53.645034] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.645039] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ad9ec0) 00:25:53.414 [2024-06-07 21:42:53.645047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.414 [2024-06-07 21:42:53.645063] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d370, cid 4, qid 0 00:25:53.414 [2024-06-07 21:42:53.645170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.414 [2024-06-07 21:42:53.645179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.414 [2024-06-07 21:42:53.645184] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.645189] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ad9ec0): datao=0, datal=4096, cccid=4 00:25:53.414 [2024-06-07 21:42:53.645195] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5d370) on tqpair(0x1ad9ec0): expected_datao=0, payload_size=4096 00:25:53.414 [2024-06-07 21:42:53.645201] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.645260] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.414 [2024-06-07 21:42:53.645266] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.675 [2024-06-07 21:42:53.686139] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.675 [2024-06-07 21:42:53.686156] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.675 [2024-06-07 21:42:53.686161] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.675 [2024-06-07 21:42:53.686166] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d370) on tqpair=0x1ad9ec0 00:25:53.675 [2024-06-07 21:42:53.686181] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:53.675 [2024-06-07 21:42:53.686200] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:53.675 [2024-06-07 21:42:53.686214] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:53.675 [2024-06-07 21:42:53.686224] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.675 [2024-06-07 21:42:53.686232] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ad9ec0) 00:25:53.675 [2024-06-07 21:42:53.686242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.675 [2024-06-07 21:42:53.686259] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d370, cid 4, qid 0 00:25:53.675 [2024-06-07 21:42:53.686433] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.675 [2024-06-07 21:42:53.686441] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.675 [2024-06-07 21:42:53.686446] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.675 [2024-06-07 21:42:53.686450] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ad9ec0): datao=0, datal=4096, cccid=4 00:25:53.675 [2024-06-07 21:42:53.686457] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5d370) on tqpair(0x1ad9ec0): expected_datao=0, payload_size=4096 00:25:53.675 [2024-06-07 21:42:53.686462] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.675 [2024-06-07 21:42:53.686471] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.675 [2024-06-07 21:42:53.686476] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.675 [2024-06-07 21:42:53.686547] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.675 [2024-06-07 21:42:53.686555] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.675 [2024-06-07 21:42:53.686559] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.675 [2024-06-07 21:42:53.686564] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d370) on tqpair=0x1ad9ec0 00:25:53.675 [2024-06-07 21:42:53.686580] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:53.675 [2024-06-07 21:42:53.686593] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:53.675 [2024-06-07 21:42:53.686603] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.686608] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.686616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.686631] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d370, cid 4, qid 0 00:25:53.676 [2024-06-07 21:42:53.686775] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.676 [2024-06-07 21:42:53.686785] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.676 [2024-06-07 21:42:53.686789] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.686794] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ad9ec0): datao=0, datal=4096, cccid=4 00:25:53.676 [2024-06-07 21:42:53.686800] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5d370) on tqpair(0x1ad9ec0): expected_datao=0, payload_size=4096 00:25:53.676 [2024-06-07 21:42:53.686806] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.686815] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.686820] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731040] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.676 [2024-06-07 21:42:53.731054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.676 [2024-06-07 21:42:53.731058] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731063] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d370) on tqpair=0x1ad9ec0 00:25:53.676 [2024-06-07 21:42:53.731075] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:53.676 [2024-06-07 21:42:53.731090] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:53.676 [2024-06-07 21:42:53.731101] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:53.676 [2024-06-07 21:42:53.731109] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:53.676 [2024-06-07 21:42:53.731116] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:53.676 [2024-06-07 21:42:53.731122] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:53.676 [2024-06-07 21:42:53.731128] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:53.676 [2024-06-07 21:42:53.731135] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:53.676 [2024-06-07 21:42:53.731153] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731159] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.731177] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731182] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731187] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.676 [2024-06-07 21:42:53.731213] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d370, cid 4, qid 0 00:25:53.676 [2024-06-07 21:42:53.731221] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d4d0, cid 5, qid 0 00:25:53.676 [2024-06-07 21:42:53.731327] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.676 [2024-06-07 21:42:53.731336] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.676 [2024-06-07 21:42:53.731341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731346] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d370) on tqpair=0x1ad9ec0 00:25:53.676 [2024-06-07 21:42:53.731355] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.676 [2024-06-07 21:42:53.731362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.676 [2024-06-07 21:42:53.731367] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d4d0) on tqpair=0x1ad9ec0 00:25:53.676 [2024-06-07 21:42:53.731386] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.731414] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d4d0, cid 5, qid 0 00:25:53.676 [2024-06-07 21:42:53.731509] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.676 [2024-06-07 21:42:53.731518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.676 [2024-06-07 21:42:53.731523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d4d0) on tqpair=0x1ad9ec0 00:25:53.676 [2024-06-07 21:42:53.731541] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731549] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.731571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d4d0, cid 5, qid 0 00:25:53.676 [2024-06-07 21:42:53.731672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.676 [2024-06-07 21:42:53.731681] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.676 [2024-06-07 21:42:53.731685] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731690] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d4d0) on tqpair=0x1ad9ec0 00:25:53.676 [2024-06-07 21:42:53.731703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731708] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.731730] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d4d0, cid 5, qid 0 00:25:53.676 [2024-06-07 21:42:53.731833] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.676 [2024-06-07 21:42:53.731842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.676 [2024-06-07 21:42:53.731846] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731851] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d4d0) on tqpair=0x1ad9ec0 00:25:53.676 [2024-06-07 21:42:53.731868] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.731891] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731896] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.731913] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731918] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.731935] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.731939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ad9ec0) 00:25:53.676 [2024-06-07 21:42:53.731947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.676 [2024-06-07 21:42:53.731962] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d4d0, cid 5, qid 0 00:25:53.676 [2024-06-07 21:42:53.731969] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d370, cid 4, qid 0 00:25:53.676 [2024-06-07 21:42:53.731975] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1b5d630, cid 6, qid 0 00:25:53.676 [2024-06-07 21:42:53.731981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d790, cid 7, qid 0 00:25:53.676 [2024-06-07 21:42:53.732214] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.676 [2024-06-07 21:42:53.732225] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.676 [2024-06-07 21:42:53.732233] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.732238] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ad9ec0): datao=0, datal=8192, cccid=5 00:25:53.676 [2024-06-07 21:42:53.732244] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5d4d0) on tqpair(0x1ad9ec0): expected_datao=0, payload_size=8192 00:25:53.676 [2024-06-07 21:42:53.732249] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.732258] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.732263] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.732271] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.676 [2024-06-07 21:42:53.732278] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.676 [2024-06-07 21:42:53.732283] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.732288] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ad9ec0): datao=0, datal=512, cccid=4 00:25:53.676 [2024-06-07 21:42:53.732293] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5d370) on tqpair(0x1ad9ec0): expected_datao=0, payload_size=512 00:25:53.676 [2024-06-07 21:42:53.732299] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.676 [2024-06-07 21:42:53.732307] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732311] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732318] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.677 [2024-06-07 21:42:53.732326] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.677 [2024-06-07 21:42:53.732330] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732334] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ad9ec0): datao=0, datal=512, cccid=6 00:25:53.677 [2024-06-07 21:42:53.732340] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5d630) on tqpair(0x1ad9ec0): expected_datao=0, payload_size=512 00:25:53.677 [2024-06-07 21:42:53.732346] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732354] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732358] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:53.677 [2024-06-07 21:42:53.732373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:53.677 [2024-06-07 21:42:53.732377] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732382] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ad9ec0): datao=0, datal=4096, cccid=7 
00:25:53.677 [2024-06-07 21:42:53.732387] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b5d790) on tqpair(0x1ad9ec0): expected_datao=0, payload_size=4096 00:25:53.677 [2024-06-07 21:42:53.732393] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732401] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732406] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.677 [2024-06-07 21:42:53.732423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.677 [2024-06-07 21:42:53.732428] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732433] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d4d0) on tqpair=0x1ad9ec0 00:25:53.677 [2024-06-07 21:42:53.732450] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.677 [2024-06-07 21:42:53.732457] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.677 [2024-06-07 21:42:53.732462] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732467] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d370) on tqpair=0x1ad9ec0 00:25:53.677 [2024-06-07 21:42:53.732480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.677 [2024-06-07 21:42:53.732488] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.677 [2024-06-07 21:42:53.732493] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732498] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d630) on tqpair=0x1ad9ec0 00:25:53.677 [2024-06-07 21:42:53.732509] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.677 [2024-06-07 21:42:53.732517] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.677 [2024-06-07 21:42:53.732522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.677 [2024-06-07 21:42:53.732526] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d790) on tqpair=0x1ad9ec0 00:25:53.677 ===================================================== 00:25:53.677 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:53.677 ===================================================== 00:25:53.677 Controller Capabilities/Features 00:25:53.677 ================================ 00:25:53.677 Vendor ID: 8086 00:25:53.677 Subsystem Vendor ID: 8086 00:25:53.677 Serial Number: SPDK00000000000001 00:25:53.677 Model Number: SPDK bdev Controller 00:25:53.677 Firmware Version: 24.09 00:25:53.677 Recommended Arb Burst: 6 00:25:53.677 IEEE OUI Identifier: e4 d2 5c 00:25:53.677 Multi-path I/O 00:25:53.677 May have multiple subsystem ports: Yes 00:25:53.677 May have multiple controllers: Yes 00:25:53.677 Associated with SR-IOV VF: No 00:25:53.677 Max Data Transfer Size: 131072 00:25:53.677 Max Number of Namespaces: 32 00:25:53.677 Max Number of I/O Queues: 127 00:25:53.677 NVMe Specification Version (VS): 1.3 00:25:53.677 NVMe Specification Version (Identify): 1.3 00:25:53.677 Maximum Queue Entries: 128 00:25:53.677 Contiguous Queues Required: Yes 00:25:53.677 Arbitration Mechanisms Supported 00:25:53.677 Weighted Round Robin: Not Supported 00:25:53.677 Vendor 
Specific: Not Supported 00:25:53.677 Reset Timeout: 15000 ms 00:25:53.677 Doorbell Stride: 4 bytes 00:25:53.677 NVM Subsystem Reset: Not Supported 00:25:53.677 Command Sets Supported 00:25:53.677 NVM Command Set: Supported 00:25:53.677 Boot Partition: Not Supported 00:25:53.677 Memory Page Size Minimum: 4096 bytes 00:25:53.677 Memory Page Size Maximum: 4096 bytes 00:25:53.677 Persistent Memory Region: Not Supported 00:25:53.677 Optional Asynchronous Events Supported 00:25:53.677 Namespace Attribute Notices: Supported 00:25:53.677 Firmware Activation Notices: Not Supported 00:25:53.677 ANA Change Notices: Not Supported 00:25:53.677 PLE Aggregate Log Change Notices: Not Supported 00:25:53.677 LBA Status Info Alert Notices: Not Supported 00:25:53.677 EGE Aggregate Log Change Notices: Not Supported 00:25:53.677 Normal NVM Subsystem Shutdown event: Not Supported 00:25:53.677 Zone Descriptor Change Notices: Not Supported 00:25:53.677 Discovery Log Change Notices: Not Supported 00:25:53.677 Controller Attributes 00:25:53.677 128-bit Host Identifier: Supported 00:25:53.677 Non-Operational Permissive Mode: Not Supported 00:25:53.677 NVM Sets: Not Supported 00:25:53.677 Read Recovery Levels: Not Supported 00:25:53.677 Endurance Groups: Not Supported 00:25:53.677 Predictable Latency Mode: Not Supported 00:25:53.677 Traffic Based Keep Alive: Not Supported 00:25:53.677 Namespace Granularity: Not Supported 00:25:53.677 SQ Associations: Not Supported 00:25:53.677 UUID List: Not Supported 00:25:53.677 Multi-Domain Subsystem: Not Supported 00:25:53.677 Fixed Capacity Management: Not Supported 00:25:53.677 Variable Capacity Management: Not Supported 00:25:53.677 Delete Endurance Group: Not Supported 00:25:53.677 Delete NVM Set: Not Supported 00:25:53.677 Extended LBA Formats Supported: Not Supported 00:25:53.677 Flexible Data Placement Supported: Not Supported 00:25:53.677 00:25:53.677 Controller Memory Buffer Support 00:25:53.677 ================================ 00:25:53.677 Supported: No 00:25:53.677 00:25:53.677 Persistent Memory Region Support 00:25:53.677 ================================ 00:25:53.677 Supported: No 00:25:53.677 00:25:53.677 Admin Command Set Attributes 00:25:53.677 ============================ 00:25:53.677 Security Send/Receive: Not Supported 00:25:53.677 Format NVM: Not Supported 00:25:53.677 Firmware Activate/Download: Not Supported 00:25:53.677 Namespace Management: Not Supported 00:25:53.677 Device Self-Test: Not Supported 00:25:53.677 Directives: Not Supported 00:25:53.677 NVMe-MI: Not Supported 00:25:53.677 Virtualization Management: Not Supported 00:25:53.677 Doorbell Buffer Config: Not Supported 00:25:53.677 Get LBA Status Capability: Not Supported 00:25:53.677 Command & Feature Lockdown Capability: Not Supported 00:25:53.677 Abort Command Limit: 4 00:25:53.677 Async Event Request Limit: 4 00:25:53.677 Number of Firmware Slots: N/A 00:25:53.677 Firmware Slot 1 Read-Only: N/A 00:25:53.677 Firmware Activation Without Reset: N/A 00:25:53.677 Multiple Update Detection Support: N/A 00:25:53.677 Firmware Update Granularity: No Information Provided 00:25:53.677 Per-Namespace SMART Log: No 00:25:53.677 Asymmetric Namespace Access Log Page: Not Supported 00:25:53.677 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:53.677 Command Effects Log Page: Supported 00:25:53.677 Get Log Page Extended Data: Supported 00:25:53.677 Telemetry Log Pages: Not Supported 00:25:53.677 Persistent Event Log Pages: Not Supported 00:25:53.677 Supported Log Pages Log Page: May Support 00:25:53.677 Commands
Supported & Effects Log Page: Not Supported 00:25:53.677 Feature Identifiers & Effects Log Page: May Support 00:25:53.677 NVMe-MI Commands & Effects Log Page: May Support 00:25:53.677 Data Area 4 for Telemetry Log: Not Supported 00:25:53.677 Error Log Page Entries Supported: 128 00:25:53.677 Keep Alive: Supported 00:25:53.677 Keep Alive Granularity: 10000 ms 00:25:53.677 00:25:53.677 NVM Command Set Attributes 00:25:53.677 ========================== 00:25:53.677 Submission Queue Entry Size 00:25:53.677 Max: 64 00:25:53.677 Min: 64 00:25:53.677 Completion Queue Entry Size 00:25:53.677 Max: 16 00:25:53.677 Min: 16 00:25:53.677 Number of Namespaces: 32 00:25:53.677 Compare Command: Supported 00:25:53.677 Write Uncorrectable Command: Not Supported 00:25:53.677 Dataset Management Command: Supported 00:25:53.677 Write Zeroes Command: Supported 00:25:53.677 Set Features Save Field: Not Supported 00:25:53.677 Reservations: Supported 00:25:53.677 Timestamp: Not Supported 00:25:53.677 Copy: Supported 00:25:53.677 Volatile Write Cache: Present 00:25:53.677 Atomic Write Unit (Normal): 1 00:25:53.677 Atomic Write Unit (PFail): 1 00:25:53.677 Atomic Compare & Write Unit: 1 00:25:53.677 Fused Compare & Write: Supported 00:25:53.677 Scatter-Gather List 00:25:53.677 SGL Command Set: Supported 00:25:53.677 SGL Keyed: Supported 00:25:53.677 SGL Bit Bucket Descriptor: Not Supported 00:25:53.677 SGL Metadata Pointer: Not Supported 00:25:53.677 Oversized SGL: Not Supported 00:25:53.678 SGL Metadata Address: Not Supported 00:25:53.678 SGL Offset: Supported 00:25:53.678 Transport SGL Data Block: Not Supported 00:25:53.678 Replay Protected Memory Block: Not Supported 00:25:53.678 00:25:53.678 Firmware Slot Information 00:25:53.678 ========================= 00:25:53.678 Active slot: 1 00:25:53.678 Slot 1 Firmware Revision: 24.09 00:25:53.678 00:25:53.678 00:25:53.678 Commands Supported and Effects 00:25:53.678 ============================== 00:25:53.678 Admin Commands 00:25:53.678 -------------- 00:25:53.678 Get Log Page (02h): Supported 00:25:53.678 Identify (06h): Supported 00:25:53.678 Abort (08h): Supported 00:25:53.678 Set Features (09h): Supported 00:25:53.678 Get Features (0Ah): Supported 00:25:53.678 Asynchronous Event Request (0Ch): Supported 00:25:53.678 Keep Alive (18h): Supported 00:25:53.678 I/O Commands 00:25:53.678 ------------ 00:25:53.678 Flush (00h): Supported LBA-Change 00:25:53.678 Write (01h): Supported LBA-Change 00:25:53.678 Read (02h): Supported 00:25:53.678 Compare (05h): Supported 00:25:53.678 Write Zeroes (08h): Supported LBA-Change 00:25:53.678 Dataset Management (09h): Supported LBA-Change 00:25:53.678 Copy (19h): Supported LBA-Change 00:25:53.678 Unknown (79h): Supported LBA-Change 00:25:53.678 Unknown (7Ah): Supported 00:25:53.678 00:25:53.678 Error Log 00:25:53.678 ========= 00:25:53.678 00:25:53.678 Arbitration 00:25:53.678 =========== 00:25:53.678 Arbitration Burst: 1 00:25:53.678 00:25:53.678 Power Management 00:25:53.678 ================ 00:25:53.678 Number of Power States: 1 00:25:53.678 Current Power State: Power State #0 00:25:53.678 Power State #0: 00:25:53.678 Max Power: 0.00 W 00:25:53.678 Non-Operational State: Operational 00:25:53.678 Entry Latency: Not Reported 00:25:53.678 Exit Latency: Not Reported 00:25:53.678 Relative Read Throughput: 0 00:25:53.678 Relative Read Latency: 0 00:25:53.678 Relative Write Throughput: 0 00:25:53.678 Relative Write Latency: 0 00:25:53.678 Idle Power: Not Reported 00:25:53.678 Active Power: Not Reported 00:25:53.678 Non-Operational
Permissive Mode: Not Supported 00:25:53.678 00:25:53.678 Health Information 00:25:53.678 ================== 00:25:53.678 Critical Warnings: 00:25:53.678 Available Spare Space: OK 00:25:53.678 Temperature: OK 00:25:53.678 Device Reliability: OK 00:25:53.678 Read Only: No 00:25:53.678 Volatile Memory Backup: OK 00:25:53.678 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:53.678 Temperature Threshold: [2024-06-07 21:42:53.732644] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.732651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ad9ec0) 00:25:53.678 [2024-06-07 21:42:53.732661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.678 [2024-06-07 21:42:53.732678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d790, cid 7, qid 0 00:25:53.678 [2024-06-07 21:42:53.732780] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.678 [2024-06-07 21:42:53.732788] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.678 [2024-06-07 21:42:53.732793] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.732798] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d790) on tqpair=0x1ad9ec0 00:25:53.678 [2024-06-07 21:42:53.732834] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:53.678 [2024-06-07 21:42:53.732850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.678 [2024-06-07 21:42:53.732859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.678 [2024-06-07 21:42:53.732867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.678 [2024-06-07 21:42:53.732875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.678 [2024-06-07 21:42:53.732885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.732890] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.732895] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.678 [2024-06-07 21:42:53.732904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.678 [2024-06-07 21:42:53.732921] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.678 [2024-06-07 21:42:53.733017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.678 [2024-06-07 21:42:53.733034] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.678 [2024-06-07 21:42:53.733039] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733044] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.678 [2024-06-07 21:42:53.733054] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733059] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.678 [2024-06-07 21:42:53.733073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.678 [2024-06-07 21:42:53.733091] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.678 [2024-06-07 21:42:53.733244] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.678 [2024-06-07 21:42:53.733253] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.678 [2024-06-07 21:42:53.733257] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733262] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.678 [2024-06-07 21:42:53.733269] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:53.678 [2024-06-07 21:42:53.733275] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:53.678 [2024-06-07 21:42:53.733287] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733292] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.678 [2024-06-07 21:42:53.733305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.678 [2024-06-07 21:42:53.733319] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.678 [2024-06-07 21:42:53.733418] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.678 [2024-06-07 21:42:53.733427] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.678 [2024-06-07 21:42:53.733431] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733436] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.678 [2024-06-07 21:42:53.733450] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733455] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733460] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.678 [2024-06-07 21:42:53.733469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.678 [2024-06-07 21:42:53.733482] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.678 [2024-06-07 21:42:53.733590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.678 [2024-06-07 21:42:53.733598] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.678 [2024-06-07 21:42:53.733602] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733607] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.678 [2024-06-07 21:42:53.733621] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733626] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.678 [2024-06-07 21:42:53.733640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.678 [2024-06-07 21:42:53.733653] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.678 [2024-06-07 21:42:53.733752] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.678 [2024-06-07 21:42:53.733761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.678 [2024-06-07 21:42:53.733765] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733770] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.678 [2024-06-07 21:42:53.733783] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733789] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.678 [2024-06-07 21:42:53.733802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.678 [2024-06-07 21:42:53.733817] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.678 [2024-06-07 21:42:53.733912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.678 [2024-06-07 21:42:53.733921] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.678 [2024-06-07 21:42:53.733925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.678 [2024-06-07 21:42:53.733930] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.733944] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.733949] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.733953] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.679 [2024-06-07 21:42:53.733962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.679 [2024-06-07 21:42:53.733975] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.679 [2024-06-07 21:42:53.734072] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.679 [2024-06-07 21:42:53.734081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.679 [2024-06-07 21:42:53.734085] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734090] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.734103] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734109] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.679 [2024-06-07 
21:42:53.734113] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.679 [2024-06-07 21:42:53.734122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.679 [2024-06-07 21:42:53.734135] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.679 [2024-06-07 21:42:53.734282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.679 [2024-06-07 21:42:53.734290] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.679 [2024-06-07 21:42:53.734295] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734299] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.734312] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734318] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734322] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.679 [2024-06-07 21:42:53.734331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.679 [2024-06-07 21:42:53.734344] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.679 [2024-06-07 21:42:53.734434] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.679 [2024-06-07 21:42:53.734443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.679 [2024-06-07 21:42:53.734447] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734452] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.734465] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734470] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734475] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.679 [2024-06-07 21:42:53.734483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.679 [2024-06-07 21:42:53.734499] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.679 [2024-06-07 21:42:53.734590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.679 [2024-06-07 21:42:53.734598] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.679 [2024-06-07 21:42:53.734603] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734608] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.734621] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734626] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.679 [2024-06-07 21:42:53.734639] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.679 [2024-06-07 21:42:53.734653] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.679 [2024-06-07 21:42:53.734791] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.679 [2024-06-07 21:42:53.734799] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.679 [2024-06-07 21:42:53.734803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734808] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.734822] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734827] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734832] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.679 [2024-06-07 21:42:53.734840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.679 [2024-06-07 21:42:53.734854] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.679 [2024-06-07 21:42:53.734953] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.679 [2024-06-07 21:42:53.734961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.679 [2024-06-07 21:42:53.734966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734971] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.734984] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734989] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.734994] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.679 [2024-06-07 21:42:53.735003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.679 [2024-06-07 21:42:53.735016] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.679 [2024-06-07 21:42:53.739035] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.679 [2024-06-07 21:42:53.739047] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.679 [2024-06-07 21:42:53.739051] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.739057] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.739071] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.739076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.739081] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ad9ec0) 00:25:53.679 [2024-06-07 21:42:53.739090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.679 [2024-06-07 21:42:53.739106] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b5d210, cid 3, qid 0 00:25:53.679 [2024-06-07 21:42:53.739222] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:53.679 [2024-06-07 21:42:53.739231] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:53.679 [2024-06-07 21:42:53.739235] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:53.679 [2024-06-07 21:42:53.739241] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b5d210) on tqpair=0x1ad9ec0 00:25:53.679 [2024-06-07 21:42:53.739252] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:53.679 0 Kelvin (-273 Celsius) 00:25:53.679 Available Spare: 0% 00:25:53.679 Available Spare Threshold: 0% 00:25:53.679 Life Percentage Used: 0% 00:25:53.679 Data Units Read: 0 00:25:53.679 Data Units Written: 0 00:25:53.679 Host Read Commands: 0 00:25:53.679 Host Write Commands: 0 00:25:53.679 Controller Busy Time: 0 minutes 00:25:53.679 Power Cycles: 0 00:25:53.679 Power On Hours: 0 hours 00:25:53.679 Unsafe Shutdowns: 0 00:25:53.679 Unrecoverable Media Errors: 0 00:25:53.679 Lifetime Error Log Entries: 0 00:25:53.679 Warning Temperature Time: 0 minutes 00:25:53.679 Critical Temperature Time: 0 minutes 00:25:53.679 00:25:53.679 Number of Queues 00:25:53.679 ================ 00:25:53.679 Number of I/O Submission Queues: 127 00:25:53.679 Number of I/O Completion Queues: 127 00:25:53.679 00:25:53.679 Active Namespaces 00:25:53.679 ================= 00:25:53.679 Namespace ID:1 00:25:53.679 Error Recovery Timeout: Unlimited 00:25:53.679 Command Set Identifier: NVM (00h) 00:25:53.679 Deallocate: Supported 00:25:53.679 Deallocated/Unwritten Error: Not Supported 00:25:53.679 Deallocated Read Value: Unknown 00:25:53.679 Deallocate in Write Zeroes: Not Supported 00:25:53.679 Deallocated Guard Field: 0xFFFF 00:25:53.679 Flush: Supported 00:25:53.679 Reservation: Supported 00:25:53.679 Namespace Sharing Capabilities: Multiple Controllers 00:25:53.679 Size (in LBAs): 131072 (0GiB) 00:25:53.679 Capacity (in LBAs): 131072 (0GiB) 00:25:53.679 Utilization (in LBAs): 131072 (0GiB) 00:25:53.679 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:53.679 EUI64: ABCDEF0123456789 00:25:53.679 UUID: 57ba4638-e361-4e30-921c-dd75707b2d66 00:25:53.679 Thin Provisioning: Not Supported 00:25:53.679 Per-NS Atomic Units: Yes 00:25:53.679 Atomic Boundary Size (Normal): 0 00:25:53.679 Atomic Boundary Size (PFail): 0 00:25:53.679 Atomic Boundary Offset: 0 00:25:53.679 Maximum Single Source Range Length: 65535 00:25:53.679 Maximum Copy Length: 65535 00:25:53.679 Maximum Source Range Count: 1 00:25:53.679 NGUID/EUI64 Never Reused: No 00:25:53.679 Namespace Write Protected: No 00:25:53.679 Number of LBA Formats: 1 00:25:53.679 Current LBA Format: LBA Format #00 00:25:53.679 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:53.679 00:25:53.679 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:53.679 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.679 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.679 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:53.679 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.679 21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:53.680 
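
The identify dump above (controller capabilities, log pages, and Namespace 1) is what SPDK's identify example prints when pointed at the TCP listener, and the trace then tears the subsystem down over RPC. A minimal sketch of the same two steps run by hand, assuming an in-tree build under the workspace path used throughout this run and the listener shown in the log (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1):

    # Sketch only, not part of the captured log; the paths and target address are
    # assumptions taken from this run.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # Mirrors the rpc_cmd nvmf_delete_subsystem call in the trace:
    sudo ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
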
21:42:53 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:53.680 rmmod nvme_tcp 00:25:53.680 rmmod nvme_fabrics 00:25:53.680 rmmod nvme_keyring 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1548156 ']' 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1548156 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 1548156 ']' 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 1548156 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1548156 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1548156' 00:25:53.680 killing process with pid 1548156 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 1548156 00:25:53.680 21:42:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 1548156 00:25:53.939 21:42:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:53.939 21:42:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:53.939 21:42:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:53.939 21:42:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:53.939 21:42:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:53.939 21:42:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.939 21:42:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.939 21:42:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.525 21:42:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:56.525 00:25:56.525 real 0m10.705s 00:25:56.525 user 0m8.864s 00:25:56.525 sys 0m5.422s 00:25:56.525 21:42:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:56.525 21:42:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:56.525 ************************************ 00:25:56.525 END TEST nvmf_identify 00:25:56.525 ************************************ 00:25:56.525 21:42:56 
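
The nvmftestfini sequence above is the standard host-side teardown: unload the NVMe-oF kernel modules (the rmmod lines for nvme_tcp, nvme_fabrics, and nvme_keyring), kill the target process, and flush the test addressing. A hedged sketch of the equivalent manual cleanup, with $nvmfpid standing in for the PID the harness tracked (1548156 in this run):

    # Sketch only; condenses the modprobe/kill/flush steps logged above.
    sudo modprobe -v -r nvme-tcp   # -v shows the rmmod of nvme_tcp and its deps
    kill -9 "$nvmfpid"             # $nvmfpid: the nvmf_tgt PID captured at startup
    sudo ip -4 addr flush cvl_0_1
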
nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:56.525 21:42:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:56.525 21:42:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:56.525 21:42:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.525 ************************************ 00:25:56.525 START TEST nvmf_perf 00:25:56.525 ************************************ 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:56.525 * Looking for test storage... 00:25:56.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.525 21:42:56 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:56.526 21:42:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.116 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:03.117 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:03.117 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:03.117 Found net devices under 0000:af:00.0: cvl_0_0 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:03.117 21:43:02 
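
The scan above matches the two Intel E810 functions on the PCI bus (device ID 0x159b, ice driver) and, as the "Found net devices under ..." lines show, resolves each one to its kernel netdev through sysfs; that is how the cvl_0_0/cvl_0_1 names used for the rest of the run are discovered. The same lookup done by hand, a sketch using the two addresses from this run:

    # Sketch only: list the netdev behind each E810 function the way the harness
    # does, via /sys/bus/pci/devices/$pci/net/.
    for pci in 0000:af:00.0 0000:af:00.1; do
        echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"
    done
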
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:03.117 Found net devices under 0000:af:00.1: cvl_0_1 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:03.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:03.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms
00:26:03.117
00:26:03.117 --- 10.0.0.2 ping statistics ---
00:26:03.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:03.117 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:03.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:03.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms
00:26:03.117
00:26:03.117 --- 10.0.0.1 ping statistics ---
00:26:03.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:03.117 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1552451
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1552451
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 1552451 ']'
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:03.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable
00:26:03.117 21:43:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:26:03.117 [2024-06-07 21:43:02.728994] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
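For anyone replaying this by hand: the nvmf_tcp_init sequence above builds a two-endpoint test network from the box's own e810 ports. The target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A rough stand-in on a machine without the NIC pair, using a veth link instead of physical ports (the interface and namespace names here are illustrative, not the harness's):

    # sketch: veth-based equivalent of the cvl_0_0 / cvl_0_1 physical pair
    sudo ip netns add tgt_ns                                  # hypothetical namespace name
    sudo ip link add veth_init type veth peer name veth_tgt   # initiator end / target end
    sudo ip link set veth_tgt netns tgt_ns
    sudo ip addr add 10.0.0.1/24 dev veth_init                # initiator address
    sudo ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    sudo ip link set veth_init up
    sudo ip netns exec tgt_ns ip link set veth_tgt up
    sudo ip netns exec tgt_ns ip link set lo up
    sudo iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # same reachability check the harness runs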
00:26:03.117 [2024-06-07 21:43:02.729063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:03.117 EAL: No free 2048 kB hugepages reported on node 1
00:26:03.117 [2024-06-07 21:43:02.824920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:03.117 [2024-06-07 21:43:02.918186] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:03.117 [2024-06-07 21:43:02.918228] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:03.117 [2024-06-07 21:43:02.918239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:03.117 [2024-06-07 21:43:02.918249] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:03.117 [2024-06-07 21:43:02.918256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:03.117 [2024-06-07 21:43:02.918306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:03.117 [2024-06-07 21:43:02.918417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:03.117 [2024-06-07 21:43:02.918532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:03.117 [2024-06-07 21:43:02.918532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:26:03.684 21:43:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:26:03.684 21:43:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0
00:26:03.684 21:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:03.684 21:43:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable
00:26:03.684 21:43:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:26:03.684 21:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:03.684 21:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:26:03.684 21:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:26:06.969 21:43:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:26:06.969 21:43:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:26:06.969 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0
00:26:06.969 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:26:06.969 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:26:06.969 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']'
00:26:06.969 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:26:06.970 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:26:06.970 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:26:07.228 [2024-06-07 21:43:07.328793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:07.228 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:07.487 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:26:07.487 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:07.745 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:26:07.745 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:26:08.004 21:43:08 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:08.263 [2024-06-07 21:43:08.355031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:08.263 21:43:08 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:08.521 21:43:08 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']'
00:26:08.522 21:43:08 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0'
00:26:08.522 21:43:08 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:26:08.522 21:43:08 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0'
00:26:09.898 Initializing NVMe Controllers
00:26:09.898 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54]
00:26:09.898 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0
00:26:09.898 Initialization complete. Launching workers.
00:26:09.898 ========================================================
00:26:09.898 Latency(us)
00:26:09.898 Device Information : IOPS MiB/s Average min max
00:26:09.898 PCIE (0000:86:00.0) NSID 1 from core 0: 69259.16 270.54 461.56 61.68 4361.92
00:26:09.898 ========================================================
00:26:09.898 Total : 69259.16 270.54 461.56 61.68 4361.92
00:26:09.898
00:26:09.898 21:43:09 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:09.898 EAL: No free 2048 kB hugepages reported on node 1
00:26:11.274 Initializing NVMe Controllers
00:26:11.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:11.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:11.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:11.274 Initialization complete. Launching workers.
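Stripped of the xtrace noise, the target that the q=1 run below talks to was assembled with a handful of RPCs. A condensed sketch ($rpc is shorthand for the scripts/rpc.py path above; the bare -o on nvmf_create_transport is whatever TCP option the harness put in NVMF_TRANSPORT_OPTS):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o              # TCP transport, flags as captured above
    $rpc bdev_malloc_create 64 512                    # 64 MiB RAM disk, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # becomes NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # local NVMe passthrough, NSID 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two namespaces are why every fabric run that follows reports NSID 1 (the RAM-backed Malloc0) and NSID 2 (the passthrough Nvme0n1) on separate rows; the RAM disk predictably posts the lower latency in each table.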
00:26:11.274 ========================================================
00:26:11.274 Latency(us)
00:26:11.274 Device Information : IOPS MiB/s Average min max
00:26:11.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 105.00 0.41 9566.81 208.04 45063.65
00:26:11.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23208.79 7946.96 51899.23
00:26:11.274 ========================================================
00:26:11.274 Total : 150.00 0.59 13659.40 208.04 51899.23
00:26:11.274
00:26:11.274 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:11.274 EAL: No free 2048 kB hugepages reported on node 1
00:26:12.210 Initializing NVMe Controllers
00:26:12.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:12.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:12.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:12.210 Initialization complete. Launching workers.
00:26:12.210 ========================================================
00:26:12.210 Latency(us)
00:26:12.210 Device Information : IOPS MiB/s Average min max
00:26:12.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7659.00 29.92 4179.83 474.29 8155.10
00:26:12.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3931.00 15.36 8183.74 6782.01 15989.68
00:26:12.210 ========================================================
00:26:12.210 Total : 11590.00 45.27 5537.84 474.29 15989.68
00:26:12.210
00:26:12.210 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:26:12.210 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:26:12.210 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:12.210 EAL: No free 2048 kB hugepages reported on node 1
00:26:14.742 Initializing NVMe Controllers
00:26:14.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:14.742 Controller IO queue size 128, less than required.
00:26:14.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:14.742 Controller IO queue size 128, less than required.
00:26:14.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:14.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:14.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:14.742 Initialization complete. Launching workers.
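A note on the "Controller IO queue size 128, less than required" pair just printed: roughly, an NVMe submission queue of size N can hold at most N-1 outstanding commands, so a queue depth of 128 does not quite fit in the 128-entry I/O queues this connection negotiated, and the overflow waits inside the driver exactly as the message says. It is a warning, not an error. If it mattered, one could either keep the initiator one below the queue size, or try advertising deeper queues when the transport is created; a hedged sketch (256 is an arbitrary example value, and whether it clears the warning depends on what the initiator negotiates):

    # target side, at transport creation time (-q / --max-queue-depth):
    $rpc nvmf_create_transport -t tcp -o -q 256
    # or initiator side: stay one below the advertised queue size
    spdk_nvme_perf -q 127 -o 262144 -O 16384 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'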
00:26:14.742 ========================================================
00:26:14.742 Latency(us)
00:26:14.742 Device Information : IOPS MiB/s Average min max
00:26:14.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 930.80 232.70 142262.80 82910.53 200564.30
00:26:14.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 552.38 138.10 236278.03 77385.27 365720.96
00:26:14.742 ========================================================
00:26:14.742 Total : 1483.18 370.80 177276.90 77385.27 365720.96
00:26:14.742
00:26:14.742 21:43:14 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:26:14.742 EAL: No free 2048 kB hugepages reported on node 1
00:26:15.309 No valid NVMe controllers or AIO or URING devices found
00:26:15.309 Initializing NVMe Controllers
00:26:15.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:15.309 Controller IO queue size 128, less than required.
00:26:15.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.309 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:26:15.309 Controller IO queue size 128, less than required.
00:26:15.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:15.309 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:26:15.309 WARNING: Some requested NVMe devices were skipped
00:26:15.309 21:43:15 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:26:15.309 EAL: No free 2048 kB hugepages reported on node 1
00:26:17.835 Initializing NVMe Controllers
00:26:17.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:17.835 Controller IO queue size 128, less than required.
00:26:17.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:17.835 Controller IO queue size 128, less than required.
00:26:17.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:17.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:17.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:17.835 Initialization complete. Launching workers.
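This final perf pass adds --transport-stat, so the block that follows prints per-namespace TCP poll-group counters before the usual latency table. Reading them, roughly: polls vs idle_polls is how often a poll actually found work, sock_completions counts socket events reaped, nvme_completions and submitted_requests are completed vs issued NVMe-oF commands, and a nonzero queued_requests means at least one command briefly waited for a free transport request. The generic shape of these invocations, with the flags spelled out:

    #   -q 128            outstanding I/Os per namespace
    #   -o 262144         256 KiB I/O size
    #   -w randrw -M 50   random mixed workload, 50% reads
    #   -t 2              seconds to run
    #   --transport-stat  dump poll-group counters on exit
    spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat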
00:26:17.835
00:26:17.835 ====================
00:26:17.835 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:26:17.835 TCP transport:
00:26:17.835 polls: 27753
00:26:17.835 idle_polls: 9273
00:26:17.835 sock_completions: 18480
00:26:17.835 nvme_completions: 4079
00:26:17.835 submitted_requests: 6142
00:26:17.835 queued_requests: 1
00:26:17.835
00:26:17.835 ====================
00:26:17.835 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:26:17.835 TCP transport:
00:26:17.835 polls: 23885
00:26:17.835 idle_polls: 5694
00:26:17.835 sock_completions: 18191
00:26:17.835 nvme_completions: 3903
00:26:17.835 submitted_requests: 5828
00:26:17.835 queued_requests: 1
00:26:17.835 ========================================================
00:26:17.835 Latency(us)
00:26:17.835 Device Information : IOPS MiB/s Average min max
00:26:17.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1018.80 254.70 130374.86 65356.60 199303.12
00:26:17.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 974.83 243.71 133221.14 74828.07 206080.90
00:26:17.835 ========================================================
00:26:17.835 Total : 1993.64 498.41 131766.61 65356.60 206080.90
00:26:17.835
00:26:17.835 21:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:26:17.835 21:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:17.835 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:18.093 rmmod nvme_tcp
00:26:18.093 rmmod nvme_fabrics
00:26:18.093 rmmod nvme_keyring
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1552451 ']'
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1552451
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 1552451 ']'
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 1552451
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1552451
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1552451'
00:26:18.093 killing process with pid 1552451
00:26:18.093 21:43:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 1552451
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 1552451
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:19.992 21:43:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:21.898 21:43:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:21.898
00:26:21.898 real 0m25.600s
00:26:21.898 user 1m8.312s
00:26:21.898 sys 0m7.693s
00:26:21.898 21:43:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:26:21.898 21:43:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:26:21.898 ************************************
00:26:21.898 END TEST nvmf_perf
00:26:21.898 ************************************
00:26:21.898 21:43:21 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:26:21.898 21:43:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:26:21.898 21:43:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:26:21.898 21:43:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:21.898 ************************************
00:26:21.898 START TEST nvmf_fio_host
00:26:21.898 ************************************
00:26:21.898 21:43:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:26:21.898 * Looking for test storage...
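The nvmf_perf test closed above with the suite's standard teardown, and the nvmf_fio_host test starting here (its storage probe continues just below) will end the same way, so the pattern is worth spelling out once. A sketch only; _remove_spdk_ns is assumed to boil down to deleting the test namespace:

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop subsystems first
    kill 1552451 && wait 1552451                            # stop the nvmf_tgt app
    modprobe -v -r nvme-tcp                                 # unload initiator modules
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator address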
00:26:21.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.898 21:43:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.898 21:43:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.898 21:43:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.898 21:43:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.898 21:43:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.898 21:43:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.899 21:43:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.899 21:43:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:21.899 21:43:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.899 21:43:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.899 21:43:22 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:28.464 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.464 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.464 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:28.465 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:28.465 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:28.465 Found net devices under 0000:af:00.0: cvl_0_0 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:28.465 Found net devices under 0000:af:00.1: cvl_0_1 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
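This is the same discovery dance nvmf_perf ran: match the configured e810 device ID (0x159b) under /sys/bus/pci, then ask sysfs which netdev the kernel bound to each port, keeping only ports whose link is up, which is what sets is_hw=yes just above. The equivalent lookup by hand (operstate is presumably what the 'up == up' test reads):

    pci=0000:af:00.0
    cat /sys/bus/pci/devices/$pci/vendor    # 0x8086
    cat /sys/bus/pci/devices/$pci/device    # 0x159b
    ls /sys/bus/pci/devices/$pci/net/       # -> cvl_0_0, the bound netdev
    cat /sys/class/net/cvl_0_0/operstate    # up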
00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.465 21:43:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:26:28.465 00:26:28.465 --- 10.0.0.2 ping statistics --- 00:26:28.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.465 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:26:28.465 00:26:28.465 --- 10.0.0.1 ping statistics --- 00:26:28.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.465 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1559388 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1559388 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 1559388 ']' 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:28.465 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:28.466 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.466 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:28.466 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.466 [2024-06-07 21:43:28.350396] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:26:28.466 [2024-06-07 21:43:28.350455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.466 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.466 [2024-06-07 21:43:28.443930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.466 [2024-06-07 21:43:28.536891] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:28.466 [2024-06-07 21:43:28.536931] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.466 [2024-06-07 21:43:28.536941] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.466 [2024-06-07 21:43:28.536950] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.466 [2024-06-07 21:43:28.536958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.466 [2024-06-07 21:43:28.537004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.466 [2024-06-07 21:43:28.537108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.466 [2024-06-07 21:43:28.537223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.466 [2024-06-07 21:43:28.537223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.032 21:43:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:29.032 21:43:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:26:29.032 21:43:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:29.291 [2024-06-07 21:43:29.513555] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.291 21:43:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:29.291 21:43:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:29.291 21:43:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.550 21:43:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:29.809 Malloc1 00:26:29.809 21:43:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:30.068 21:43:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:30.326 21:43:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.326 [2024-06-07 21:43:30.577058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.585 21:43:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:30.870 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:31.135 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:31.135 fio-3.35 00:26:31.135 Starting 1 thread 00:26:31.135 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.661 00:26:33.661 test: (groupid=0, jobs=1): err= 0: pid=1560074: Fri Jun 7 21:43:33 2024 00:26:33.661 read: IOPS=8063, BW=31.5MiB/s (33.0MB/s)(63.2MiB/2007msec) 00:26:33.661 slat (usec): min=2, max=250, avg= 2.60, stdev= 2.68 00:26:33.661 clat (usec): min=2901, max=15108, avg=8754.62, stdev=704.74 00:26:33.661 lat (usec): min=2935, max=15110, avg=8757.22, stdev=704.49 00:26:33.661 clat percentiles (usec): 00:26:33.661 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8225], 00:26:33.661 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:26:33.661 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9765], 00:26:33.661 | 99.00th=[10159], 99.50th=[10421], 99.90th=[13042], 99.95th=[14746], 00:26:33.661 | 99.99th=[15008] 00:26:33.661 bw ( KiB/s): min=31321, 
max=32664, per=99.92%, avg=32228.25, stdev=612.69, samples=4 00:26:33.661 iops : min= 7830, max= 8166, avg=8057.00, stdev=153.30, samples=4 00:26:33.661 write: IOPS=8047, BW=31.4MiB/s (33.0MB/s)(63.1MiB/2007msec); 0 zone resets 00:26:33.661 slat (usec): min=2, max=239, avg= 2.70, stdev= 2.04 00:26:33.661 clat (usec): min=2476, max=13849, avg=7077.30, stdev=592.74 00:26:33.661 lat (usec): min=2491, max=13852, avg=7080.00, stdev=592.54 00:26:33.661 clat percentiles (usec): 00:26:33.661 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6652], 00:26:33.661 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:26:33.661 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7898], 00:26:33.661 | 99.00th=[ 8291], 99.50th=[ 8455], 99.90th=[11863], 99.95th=[13566], 00:26:33.661 | 99.99th=[13829] 00:26:33.661 bw ( KiB/s): min=32064, max=32216, per=99.89%, avg=32153.75, stdev=64.05, samples=4 00:26:33.661 iops : min= 8016, max= 8054, avg=8038.25, stdev=15.97, samples=4 00:26:33.661 lat (msec) : 4=0.11%, 10=98.62%, 20=1.26% 00:26:33.661 cpu : usr=66.75%, sys=27.82%, ctx=76, majf=0, minf=5 00:26:33.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:33.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:33.661 issued rwts: total=16183,16151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:33.661 00:26:33.661 Run status group 0 (all jobs): 00:26:33.661 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.2MiB (66.3MB), run=2007-2007msec 00:26:33.661 WRITE: bw=31.4MiB/s (33.0MB/s), 31.4MiB/s-31.4MiB/s (33.0MB/s-33.0MB/s), io=63.1MiB (66.2MB), run=2007-2007msec 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk 
'{print $3}' 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:33.661 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:33.662 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:33.662 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:33.662 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:33.662 21:43:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:33.919 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:33.919 fio-3.35 00:26:33.919 Starting 1 thread 00:26:33.919 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.449 00:26:36.449 test: (groupid=0, jobs=1): err= 0: pid=1560724: Fri Jun 7 21:43:36 2024 00:26:36.449 read: IOPS=9422, BW=147MiB/s (154MB/s)(296MiB/2009msec) 00:26:36.449 slat (nsec): min=2183, max=75715, avg=2582.88, stdev=1283.42 00:26:36.449 clat (usec): min=2250, max=54837, avg=8271.54, stdev=4319.49 00:26:36.449 lat (usec): min=2252, max=54839, avg=8274.12, stdev=4319.60 00:26:36.449 clat percentiles (usec): 00:26:36.449 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5669], 00:26:36.449 | 30.00th=[ 6325], 40.00th=[ 6980], 50.00th=[ 7767], 60.00th=[ 8586], 00:26:36.449 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[11731], 95.00th=[13173], 00:26:36.449 | 99.00th=[15664], 99.50th=[49021], 99.90th=[52691], 99.95th=[54264], 00:26:36.449 | 99.99th=[54789] 00:26:36.449 bw ( KiB/s): min=57472, max=94080, per=49.76%, avg=75016.00, stdev=15696.15, samples=4 00:26:36.450 iops : min= 3592, max= 5880, avg=4688.50, stdev=981.01, samples=4 00:26:36.450 write: IOPS=5558, BW=86.8MiB/s (91.1MB/s)(153MiB/1766msec); 0 zone resets 00:26:36.450 slat (usec): min=25, max=375, avg=29.18, stdev= 8.42 00:26:36.450 clat (usec): min=3957, max=22546, avg=9354.23, stdev=2628.60 00:26:36.450 lat (usec): min=3985, max=22576, avg=9383.41, stdev=2630.20 00:26:36.450 clat percentiles (usec): 00:26:36.450 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 7046], 00:26:36.450 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9503], 00:26:36.450 | 70.00th=[10421], 80.00th=[11469], 90.00th=[13304], 95.00th=[14615], 00:26:36.450 | 99.00th=[15926], 99.50th=[16909], 99.90th=[19268], 99.95th=[22414], 00:26:36.450 | 99.99th=[22676] 00:26:36.450 bw ( KiB/s): min=60288, max=98304, per=87.63%, avg=77928.00, stdev=16312.62, samples=4 00:26:36.450 iops : min= 3768, max= 6144, avg=4870.50, stdev=1019.54, samples=4 00:26:36.450 lat (msec) : 4=1.71%, 10=73.21%, 20=24.62%, 50=0.24%, 100=0.23% 00:26:36.450 cpu : usr=80.73%, sys=16.48%, ctx=109, majf=0, minf=2 00:26:36.450 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:36.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:36.450 issued rwts: total=18930,9816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:36.450 00:26:36.450 Run status group 0 (all jobs): 00:26:36.450 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=296MiB (310MB), run=2009-2009msec 00:26:36.450 WRITE: bw=86.8MiB/s (91.1MB/s), 86.8MiB/s-86.8MiB/s (91.1MB/s-91.1MB/s), io=153MiB (161MB), run=1766-1766msec 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.450 rmmod nvme_tcp 00:26:36.450 rmmod nvme_fabrics 00:26:36.450 rmmod nvme_keyring 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1559388 ']' 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1559388 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 1559388 ']' 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 1559388 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1559388 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1559388' 00:26:36.450 killing process with pid 1559388 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 1559388 00:26:36.450 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 1559388 00:26:36.709 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.709 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
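Before the teardown traces resume below, the fio_nvme helper exercised above is worth seeing without the xtrace noise: it probes the plugin for a linked sanitizer runtime, preloads whatever it finds together with the plugin itself, and hands fio a connection string instead of a block device. A minimal sketch of that invocation, using the same paths and job file this log shows (SPDK_DIR, FIO_DIR and asan_lib are shorthand names for this sketch only):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
FIO_DIR=/usr/src/fio
plugin=$SPDK_DIR/build/fio/spdk_nvme

# Condenses the ldd | grep libasan / libclang_rt.asan probe traced above;
# on this run both greps came back empty, so only the plugin is preloaded.
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')

# The spdk ioengine takes the target as a --filename tuple (transport type,
# address family, address, service id, namespace) rather than a device node.
LD_PRELOAD="$asan_lib $plugin" "$FIO_DIR/fio" \
    "$SPDK_DIR/app/fio/nvme/mock_sgl_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'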
00:26:36.709 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.709 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.709 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.709 21:43:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.709 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.709 21:43:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.311 21:43:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:39.311 00:26:39.311 real 0m16.990s 00:26:39.311 user 1m1.591s 00:26:39.311 sys 0m6.908s 00:26:39.311 21:43:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:39.311 21:43:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.311 ************************************ 00:26:39.311 END TEST nvmf_fio_host 00:26:39.311 ************************************ 00:26:39.311 21:43:38 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:39.311 21:43:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:39.311 21:43:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:39.311 21:43:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:39.311 ************************************ 00:26:39.311 START TEST nvmf_failover 00:26:39.311 ************************************ 00:26:39.311 21:43:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:39.311 * Looking for test storage... 
00:26:39.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.311 21:43:39 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.312 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.871 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:45.872 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:45.872 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:45.872 Found net devices under 0000:af:00.0: cvl_0_0 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:45.872 Found net devices under 0000:af:00.1: cvl_0_1 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:45.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:45.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:26:45.872 00:26:45.872 --- 10.0.0.2 ping statistics --- 00:26:45.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.872 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:45.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:26:45.872 00:26:45.872 --- 10.0.0.1 ping statistics --- 00:26:45.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.872 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1565172 00:26:45.872 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1565172 00:26:45.873 21:43:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:45.873 21:43:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1565172 ']' 00:26:45.873 21:43:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.873 21:43:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:45.873 21:43:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.873 21:43:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:45.873 21:43:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:45.873 [2024-06-07 21:43:45.796933] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
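The DPDK EAL parameter dump that follows belongs to the nvmf_tgt instance started just above inside the cvl_0_0_ns_spdk namespace. Stripped of the xtrace noise, the topology this run built is simple: the second e810 port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, while the first (cvl_0_0) moves into the namespace as the target at 10.0.0.2, with an iptables rule admitting NVMe/TCP on 4420; the pings above verify both directions. A condensed sketch of the commands the trace already ran:

# Target NIC port gets its own namespace; the initiator port stays put.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP

# The target itself then runs inside the namespace, as nvmfappstart did above:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE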
00:26:45.873 [2024-06-07 21:43:45.796997] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.873 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.873 [2024-06-07 21:43:45.885681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:45.873 [2024-06-07 21:43:45.975842] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.873 [2024-06-07 21:43:45.975884] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.873 [2024-06-07 21:43:45.975895] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.873 [2024-06-07 21:43:45.975904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.873 [2024-06-07 21:43:45.975911] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.873 [2024-06-07 21:43:45.976020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.873 [2024-06-07 21:43:45.976136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.873 [2024-06-07 21:43:45.976137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.806 21:43:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:46.806 21:43:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:26:46.806 21:43:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:46.806 21:43:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:46.806 21:43:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:46.806 21:43:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.806 21:43:46 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:46.806 [2024-06-07 21:43:47.010737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.806 21:43:47 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:47.064 Malloc0 00:26:47.064 21:43:47 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.322 21:43:47 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.580 21:43:47 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.838 [2024-06-07 21:43:48.056567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.838 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:48.096 [2024-06-07 
21:43:48.309404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:48.096 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:48.354 [2024-06-07 21:43:48.558276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1565632 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1565632 /var/tmp/bdevperf.sock 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1565632 ']' 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:48.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:48.354 21:43:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:48.920 21:43:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:48.920 21:43:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:26:48.920 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:49.178 NVMe0n1 00:26:49.178 21:43:49 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:49.436 00:26:49.436 21:43:49 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1565813 00:26:49.436 21:43:49 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:49.436 21:43:49 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:50.812 21:43:50 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.812 [2024-06-07 21:43:50.929202] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b7f30 is same with the state(5) to be set 00:26:50.812 [2024-06-07 21:43:50.929249] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x16b7f30 is same with the state(5) to be set 00:26:50.812 [... the same tcp.c:1602 recv-state *ERROR* line repeats for the remaining queue pairs, timestamps 21:43:50.929256 through 21:43:50.929524; duplicates trimmed ...] 00:26:50.812 21:43:50 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:54.091 21:43:53 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:54.091 00:26:54.091 21:43:54 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:54.349 21:43:54 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:57.629 21:43:57 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.629 [2024-06-07 21:43:57.828852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.629 21:43:57 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:59.009 21:43:58 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:59.009 [2024-06-07 21:43:59.089396] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150f620 is same with the state(5) to be set 00:26:59.009 [... repeated for the remaining queue pairs, timestamps 21:43:59.089450 through 21:43:59.089496; duplicates trimmed ...] 00:26:59.009 [2024-06-07 21:43:59.089505] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150f620 is same with the state(5) to be set 00:26:59.009 21:43:59 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1565813 00:27:05.578 0 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1565632 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1565632 ']' 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1565632 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1565632 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1565632' 00:27:05.578 killing process with pid 1565632 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1565632 00:27:05.578 21:44:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1565632 00:27:05.578 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:05.578 [2024-06-07 21:43:48.635733] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:27:05.578 [2024-06-07 21:43:48.635804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565632 ] 00:27:05.578 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.578 [2024-06-07 21:43:48.724926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.578 [2024-06-07 21:43:48.813060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.578 Running I/O for 15 seconds... 
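The abort records below are the expected fallout of the listener dance traced above: removing the 4420 listener at 21:43:50 tears down its queue pairs, every read still in flight completes as ABORTED - SQ DELETION, and the NVMe bdev fails over to the surviving 4421 path while the verify workload keeps running. For reference, the whole choreography reduces to the sketch below (paths, NQN and ports exactly as traced; spdk and rpc are shorthand variables, and the backgrounding with & stands in for the script's own process handling):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py

# Target: a 64 MB / 512 B-block malloc namespace behind cnode1, three listeners.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# Host: bdevperf in RPC mode with two paths to the same NQN, then start I/O.
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# Rotate listeners under load; each removal forces a failover to a live path.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait    # the perform_tests run above exited 0 after the 15 s verify workload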
00:27:05.578 [2024-06-07 21:43:50.930059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.578 [2024-06-07 21:43:50.930104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.578 [... the same READ command / ABORTED - SQ DELETION completion pair repeats for every command in flight, lba:89632 through lba:90024; duplicates trimmed, and the capture ends inside this flood ...]
00:27:05.580 [2024-06-07 21:43:50.931274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.931985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.931995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.932008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.580 [2024-06-07 21:43:50.932018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.932037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.580 [2024-06-07 21:43:50.932047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.932059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.580 [2024-06-07 21:43:50.932069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.580 [2024-06-07 21:43:50.932081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 
21:43:50.932422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.581 [2024-06-07 21:43:50.932593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.581 [2024-06-07 21:43:50.932615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.581 [2024-06-07 21:43:50.932640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.581 [2024-06-07 21:43:50.932908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.581 [2024-06-07 21:43:50.932918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.582 [2024-06-07 21:43:50.932931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.582 [2024-06-07 21:43:50.932941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.582 [2024-06-07 21:43:50.932953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.582 [2024-06-07 21:43:50.932964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.582 [2024-06-07 21:43:50.932977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.582 [2024-06-07 21:43:50.932987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.582 [2024-06-07 21:43:50.932999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.582 [2024-06-07 21:43:50.933009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.582 [2024-06-07 21:43:50.933039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.582 [2024-06-07 21:43:50.933049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.582 [2024-06-07 21:43:50.933058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90640 len:8 PRP1 0x0 PRP2 0x0 00:27:05.582 [2024-06-07 21:43:50.933070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.582 [2024-06-07 21:43:50.933120] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10bf800 was disconnected and freed. reset controller. 
00:27:05.582 [2024-06-07 21:43:50.933132] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:05.582 [2024-06-07 21:43:50.933158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.582 [2024-06-07 21:43:50.933170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.582 [2024-06-07 21:43:50.933182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.582 [2024-06-07 21:43:50.933192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.582 [2024-06-07 21:43:50.933203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.582 [2024-06-07 21:43:50.933213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.582 [2024-06-07 21:43:50.933224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.582 [2024-06-07 21:43:50.933235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.582 [2024-06-07 21:43:50.933245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.582 [2024-06-07 21:43:50.937524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.582 [2024-06-07 21:43:50.937559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f980 (9): Bad file descriptor
00:27:05.582 [2024-06-07 21:43:51.105820] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:05.582 [2024-06-07 21:43:54.571059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.582 [2024-06-07 21:43:54.571106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided: each remaining queued READ (lba 73504-73800) and WRITE (lba 73824-74056) is printed by nvme_io_qpair_print_command and completed as "ABORTED - SQ DELETION (00/08)" ...]
00:27:05.584 [2024-06-07 21:43:54.572659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.584 [2024-06-07 21:43:54.572669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.584 [2024-06-07 21:43:54.572681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.584 [2024-06-07 21:43:54.572691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.572979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.572991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.573001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.573013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.573023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.573042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.573052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.573064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.573074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.573086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.573096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.573107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.573118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.573132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.573142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.584 [2024-06-07 21:43:54.573153] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.584 [2024-06-07 21:43:54.573163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573373] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74384 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.585 [2024-06-07 21:43:54.573807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.585 [2024-06-07 21:43:54.573819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:05.585 [2024-06-07 21:43:54.573830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.585 [2024-06-07 21:43:54.573842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.585 [2024-06-07 21:43:54.573852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.585 [2024-06-07 21:43:54.573864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.585 [2024-06-07 21:43:54.573874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.585 [2024-06-07 21:43:54.573887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.585 [2024-06-07 21:43:54.573896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.585 [2024-06-07 21:43:54.573908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.585 [2024-06-07 21:43:54.573919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.585 [2024-06-07 21:43:54.573931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.585 [2024-06-07 21:43:54.573942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.585 [2024-06-07 21:43:54.573969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:05.585 [2024-06-07 21:43:54.573978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:05.585 [2024-06-07 21:43:54.573988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74512 len:8 PRP1 0x0 PRP2 0x0
00:27:05.585 [2024-06-07 21:43:54.574001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.585 [2024-06-07 21:43:54.574056] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1099d90 was disconnected and freed. reset controller.
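What the storm above means: when bdev_nvme tears down a qpair for a reset, every command still queued on it is completed with ABORTED - SQ DELETION, that is, status code type 00h (generic) with status code 08h, "Command Aborted due to SQ Deletion"; the bdev layer requeues those commands and retries them on the new path, so the aborts are an expected side effect of the failover, not data loss. A minimal triage sketch over a saved copy of this console output, assuming the hypothetical file name console.log:

# Hypothetical post-run triage, not part of the test itself: count the
# aborted completions and print the first and last LBA touched by the
# aborted WRITEs, to confirm the storm is confined to qpair teardown.
grep -c 'ABORTED - SQ DELETION (00/08)' console.log
grep -oE 'lba:[0-9]+' console.log | cut -d: -f2 | sort -n | sed -n '1p;$p'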
00:27:05.585 [2024-06-07 21:43:54.574069] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:05.585 [2024-06-07 21:43:54.574094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.586 [2024-06-07 21:43:54.574105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.586 [2024-06-07 21:43:54.574116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.586 [2024-06-07 21:43:54.574126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.586 [2024-06-07 21:43:54.574137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.586 [2024-06-07 21:43:54.574147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.586 [2024-06-07 21:43:54.574157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:05.586 [2024-06-07 21:43:54.574167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.586 [2024-06-07 21:43:54.574177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.586 [2024-06-07 21:43:54.578418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.586 [2024-06-07 21:43:54.578454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f980 (9): Bad file descriptor
00:27:05.586 [2024-06-07 21:43:54.620013] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
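For context, the failover logged above exercises two TCP listeners on the same subsystem: when the path on port 4421 is torn down, bdev_nvme fails the controller, switches to the 4422 TRID, reconnects, and retries the aborted I/O. A minimal sketch of how such a two-listener failover topology is typically built with SPDK's scripts/rpc.py; the bdev name Nvme0, the serial number, and the -x failover multipath mode are illustrative assumptions, not values taken from this job:

# Target side: one subsystem, two TCP listeners, so the initiator has a
# second portal (4422) to fail over to when 4421 goes away.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Initiator side: attach both paths under one controller name; in failover
# multipath mode bdev_nvme switches TRIDs on reset, which is exactly the
# "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422" transition seen above.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover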
00:27:05.586 [2024-06-07 21:43:59.093231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093742] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.586 [2024-06-07 21:43:59.093972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89136 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.586 [2024-06-07 21:43:59.093982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:05.587 [2024-06-07 21:43:59.094216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.587 [2024-06-07 21:43:59.094832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.587 [2024-06-07 21:43:59.094845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-06-07 21:43:59.094855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.588 [2024-06-07 21:43:59.094867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.588 [2024-06-07 21:43:59.094877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0
00:27:05.588 [2024-06-07 21:43:59.094889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.588 [2024-06-07 21:43:59.094898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 28 similar command/completion pairs elided: queued WRITE (lba 89472-89632, SGL DATA BLOCK OFFSET) and READ (lba 88720-88768, SGL TRANSPORT DATA BLOCK) commands, len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1, timestamps 21:43:59.094-21:43:59.095 ...]
00:27:05.589 [2024-06-07 21:43:59.095544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.589 [2024-06-07 21:43:59.095554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
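[editor's note] The paired *NOTICE* lines above are SPDK printing each still-queued command together with the completion it manufactures while the submission queue is torn down; the (00/08) in every completion is generic status code type 0x0, status code 0x8 (ABORTED - SQ DELETION). In application code these aborts arrive through the ordinary I/O completion callback. A minimal sketch of detecting them, assuming a hypothetical callback io_done registered via spdk_nvme_ns_cmd_write(); the spdk_nvme_* types and constants are the real SPDK public API, everything else is illustrative:

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical spdk_nvme_cmd_cb completion callback; io_done and the
 * retry note are illustrative, the SPDK symbols are real. */
static void
io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Matches the "ABORTED - SQ DELETION (00/08)" lines in this
		 * log: the command never executed on the controller; it was
		 * completed manually while the queue pair was being deleted,
		 * so it can be resubmitted on another qpair without risking
		 * duplicate execution. */
		fprintf(stderr, "i/o aborted by sq deletion, will retry\n");
		return;
	}
	/* ... normal success/error handling ... */
}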
00:27:05.589 [2024-06-07 21:43:59.095579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:05.589 [2024-06-07 21:43:59.095589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0
00:27:05.589 [2024-06-07 21:43:59.095599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~100 similar groups elided, one per queued request: nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o / nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: / the queued command (WRITE lba 88896-89736 or READ lba 88776-88888, len:8 PRP1 0x0 PRP2 0x0) / its completion ABORTED - SQ DELETION (00/08) qid:1, timestamps 21:43:59.095-21:43:59.117 ...]
00:27:05.594 [2024-06-07 21:43:59.117829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:05.594 [2024-06-07 21:43:59.117839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:05.594 [2024-06-07 21:43:59.117850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89552 len:8 PRP1 0x0 PRP2 0x0
00:27:05.594 [2024-06-07 21:43:59.117863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:27:05.594 [2024-06-07 21:43:59.117877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.117889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.117900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89560 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.117914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.117927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.117937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.117949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89568 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.117965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.117979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.117989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89576 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89584 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88720 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88728 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118182] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88736 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88744 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88752 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88760 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88768 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89592 len:8 PRP1 0x0 PRP2 0x0 00:27:05.594 [2024-06-07 21:43:59.118462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.594 [2024-06-07 21:43:59.118476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:27:05.594 [2024-06-07 21:43:59.118486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.594 [2024-06-07 21:43:59.118498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89600 len:8 PRP1 0x0 PRP2 0x0 00:27:05.595 [2024-06-07 21:43:59.118511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.595 [2024-06-07 21:43:59.118535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.595 [2024-06-07 21:43:59.118546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89608 len:8 PRP1 0x0 PRP2 0x0 00:27:05.595 [2024-06-07 21:43:59.118559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.595 [2024-06-07 21:43:59.118583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.595 [2024-06-07 21:43:59.118595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89616 len:8 PRP1 0x0 PRP2 0x0 00:27:05.595 [2024-06-07 21:43:59.118609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.595 [2024-06-07 21:43:59.118633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.595 [2024-06-07 21:43:59.118645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89624 len:8 PRP1 0x0 PRP2 0x0 00:27:05.595 [2024-06-07 21:43:59.118657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.595 [2024-06-07 21:43:59.118682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.595 [2024-06-07 21:43:59.118694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89632 len:8 PRP1 0x0 PRP2 0x0 00:27:05.595 [2024-06-07 21:43:59.118707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.595 [2024-06-07 21:43:59.118734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.595 [2024-06-07 21:43:59.118745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89640 len:8 PRP1 0x0 PRP2 0x0 00:27:05.595 [2024-06-07 21:43:59.118759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.595 [2024-06-07 
21:43:59.118783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.595 [2024-06-07 21:43:59.118794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0 00:27:05.595 [2024-06-07 21:43:59.118807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118865] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10c15c0 was disconnected and freed. reset controller. 00:27:05.595 [2024-06-07 21:43:59.118884] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:05.595 [2024-06-07 21:43:59.118918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.595 [2024-06-07 21:43:59.118934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.595 [2024-06-07 21:43:59.118962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.118977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.595 [2024-06-07 21:43:59.118990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.119005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.595 [2024-06-07 21:43:59.119018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.595 [2024-06-07 21:43:59.119041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.595 [2024-06-07 21:43:59.119077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f980 (9): Bad file descriptor 00:27:05.595 [2024-06-07 21:43:59.126354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.595 [2024-06-07 21:43:59.209723] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:05.595
00:27:05.595 Latency(us)
00:27:05.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:05.595 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:05.595 Verification LBA range: start 0x0 length 0x4000
00:27:05.595 NVMe0n1 : 15.01 7526.06 29.40 581.62 0.00 15753.31 923.46 34555.35
00:27:05.595 ===================================================================================================================
00:27:05.595 Total : 7526.06 29.40 581.62 0.00 15753.31 923.46 34555.35
00:27:05.595 Received shutdown signal, test time was about 15.000000 seconds
00:27:05.595
00:27:05.595 Latency(us)
00:27:05.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:05.595 ===================================================================================================================
00:27:05.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1568455
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1568455 /var/tmp/bdevperf.sock
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1568455 ']'
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable
00:27:05.595 21:44:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:06.160 21:44:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:27:06.160 21:44:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
00:27:06.160 21:44:06 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:06.160 [2024-06-07 21:44:06.361663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:06.160 21:44:06 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:06.418 [2024-06-07 21:44:06.610408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:27:06.418 21:44:06 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:06.984 NVMe0n1
00:27:06.984 21:44:07 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:07.242
00:27:07.242 21:44:07 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:07.808
00:27:07.808 21:44:07 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
21:44:07 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:08.066 21:44:08 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:08.324 21:44:08 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:27:11.603 21:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
21:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:27:11.603 21:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1569729
21:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:11.603 21:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1569729
00:27:12.537 0
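The trace above is the whole multipath dance in miniature: two extra listeners, three attach calls that register alternate trids under the single controller name NVMe0, a detach of the active 4420 path to force a failover, and finally bdevperf.py perform_tests to run I/O across the event (it returns 0 on success, as logged). Condensed into a standalone sketch, with the long Jenkins paths shortened to relative ones:

    RPC=scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target side: listen on the two extra ports.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

    # Initiator side: attach the same subsystem over all three ports under one
    # bdev controller name, so bdev_nvme keeps the spares as failover trids.
    for port in 4420 4421 4422; do
      $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
           -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done

    # Drop the active path, give the reset a moment, then run I/O over it.
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
         -s 4420 -f ipv4 -n $NQN
    sleep 3
    examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests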
00:27:12.537 21:44:12 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-06-07 21:44:05.189587] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:27:12.537 [2024-06-07 21:44:05.189655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568455 ]
00:27:12.537 EAL: No free 2048 kB hugepages reported on node 1
00:27:12.537 [2024-06-07 21:44:05.281281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:12.537 [2024-06-07 21:44:05.367322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:12.537 [2024-06-07 21:44:08.314287] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:12.537 [2024-06-07 21:44:08.314341 .. 21:44:08.314422] nvme_qpair.c: [four queued admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3, cdw10:00000000 cdw11:00000000) aborted the same way: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:12.537 [2024-06-07 21:44:08.314431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:12.537 [2024-06-07 21:44:08.314464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:12.537 [2024-06-07 21:44:08.314482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184a980 (9): Bad file descriptor
00:27:12.537 [2024-06-07 21:44:08.327898] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:12.537 Running I/O for 1 seconds...
00:27:12.537
00:27:12.537 Latency(us)
00:27:12.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.537 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:12.537 Verification LBA range: start 0x0 length 0x4000
00:27:12.537 NVMe0n1 : 1.01 7442.77 29.07 0.00 0.00 17111.76 3053.38 14358.34
00:27:12.537 ===================================================================================================================
00:27:12.537 Total : 7442.77 29.07 0.00 0.00 17111.76 3053.38 14358.34
00:27:12.537 21:44:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:12.537 21:44:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:27:12.795 21:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:13.052 21:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
21:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:27:13.310 21:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:13.568 21:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:27:16.849 21:44:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
21:44:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1568455
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1568455 ']'
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1568455
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1568455
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1568455'
killing process with pid 1568455
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1568455
00:27:16.849 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1568455
00:27:17.107 21:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:27:17.107 21:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:17.365 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1565172 ']'
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1565172
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1565172 ']'
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1565172
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1565172
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1565172'
killing process with pid 1565172
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1565172
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1565172
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:17.624 21:44:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:20.206 21:44:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:20.206
00:27:20.206 real 0m40.957s
00:27:20.206 user 2m10.984s
00:27:20.206 sys 0m8.323s
00:27:20.206 21:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable
00:27:20.206 21:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
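Teardown then mirrors setup: the kernel initiator modules are unloaded (the rmmod lines above are the verbose output of those modprobe -r calls), the target process is killed, the test namespace is removed, and the remaining address is flushed. As a condensed sketch; note the namespace removal runs behind _remove_spdk_ns with its output hidden in the trace, so that exact call is an assumption here:

    # Sketch of the nvmftestfini cleanup traced above.
    sync
    modprobe -v -r nvme-tcp       # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumption: what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1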
00:27:20.206 ************************************
00:27:20.206 END TEST nvmf_failover ************************************
00:27:20.206 21:44:19 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:27:20.206 21:44:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:27:20.206 21:44:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:27:20.206 21:44:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:20.206 ************************************
00:27:20.206 START TEST nvmf_host_discovery ************************************
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:27:20.206 * Looking for test storage...
00:27:20.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.206 21:44:20
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.206 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:26.767 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:26.767 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:26.767 Found net devices under 0000:af:00.0: cvl_0_0 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:26.767 Found net devices under 0000:af:00.1: cvl_0_1 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:26.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:26.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms
00:27:26.767
00:27:26.767 --- 10.0.0.2 ping statistics ---
00:27:26.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:26.767 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms
00:27:26.767 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:26.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:26.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms
00:27:26.767
00:27:26.767 --- 10.0.0.1 ping statistics ---
00:27:26.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:26.768 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1574610
00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1574610
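The discovery test runs on the two-interface, single-namespace topology plumbed above: the target-side port cvl_0_0 is moved into its own netns with 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1, an iptables rule opens 4420 on the initiator side, and a ping in each direction proves the path before the target starts. The same plumbing as a standalone sketch (interface names are from this rig):

    NS=cvl_0_0_ns_spdk
    # Target port lives in its own namespace with 10.0.0.2.
    ip netns add $NS
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    # Let NVMe/TCP traffic in on the initiator side.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Prove both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec $NS ping -c 1 10.0.0.1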
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1574610 ']' 00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:26.768 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.768 [2024-06-07 21:44:26.405011] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:27:26.768 [2024-06-07 21:44:26.405080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.768 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.768 [2024-06-07 21:44:26.492669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.768 [2024-06-07 21:44:26.580759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.768 [2024-06-07 21:44:26.580801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.768 [2024-06-07 21:44:26.580812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.768 [2024-06-07 21:44:26.580820] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.768 [2024-06-07 21:44:26.580828] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
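For reference, the launch above — nvmf_tgt started inside the cvl_0_0_ns_spdk namespace, followed by waitforlisten on /var/tmp/spdk.sock — reduces to the launch-and-poll pattern below. This is a minimal sketch, not the autotest_common.sh helper itself; the rpc.py path, the 100-try budget, and the rpc_get_methods probe are assumptions:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
tgtpid=$!
retries=100                                     # retry budget is an assumption
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$tgtpid" 2>/dev/null || exit 1     # target died during startup
    (( retries-- > 0 )) || exit 1               # give up after ~10 seconds
    sleep 0.1
done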
00:27:26.768 [2024-06-07 21:44:26.580856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.333 [2024-06-07 21:44:27.377861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.333 [2024-06-07 21:44:27.386024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.333 null0 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.333 null1 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1574855 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1574855 /tmp/host.sock 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1574855 ']' 00:27:27.333 21:44:27 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:27.333 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.333 21:44:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:27.333 [2024-06-07 21:44:27.463586] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:27:27.333 [2024-06-07 21:44:27.463640] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574855 ] 00:27:27.333 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.333 [2024-06-07 21:44:27.551828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.591 [2024-06-07 21:44:27.641900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:27:28.156 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:28.414 21:44:28 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:28.414 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.672 [2024-06-07 21:44:28.757767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:28.672 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:28.673 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.930 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:27:28.930 21:44:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:27:29.495 [2024-06-07 21:44:29.468248] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:29.495 [2024-06-07 21:44:29.468283] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:29.495 [2024-06-07 21:44:29.468306] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:29.495 [2024-06-07 21:44:29.554573] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:29.753 [2024-06-07 21:44:29.779979] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:27:29.753 [2024-06-07 21:44:29.780005] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:29.753 21:44:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:30.011 21:44:30 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:30.011 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.012 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.270 [2024-06-07 21:44:30.310259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:30.270 [2024-06-07 21:44:30.310922] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:30.270 [2024-06-07 21:44:30.310952] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:30.270 [2024-06-07 21:44:30.399245] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:30.270 21:44:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:27:30.270 [2024-06-07 21:44:30.501986] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:30.270 [2024-06-07 21:44:30.502010] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:30.270 [2024-06-07 21:44:30.502017] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.643 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.643 [2024-06-07 21:44:31.590738] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:31.643 [2024-06-07 21:44:31.590768] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:31.643 [2024-06-07 21:44:31.591474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.643 [2024-06-07 21:44:31.591497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.643 [2024-06-07 21:44:31.591509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.644 [2024-06-07 21:44:31.591519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.644 [2024-06-07 21:44:31.591535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.644 [2024-06-07 21:44:31.591545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.644 [2024-06-07 21:44:31.591555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.644 [2024-06-07 21:44:31.591564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.644 [2024-06-07 21:44:31.591575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:31.644 21:44:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:31.644 [2024-06-07 21:44:31.601479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.644 [2024-06-07 21:44:31.611522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.644 [2024-06-07 21:44:31.611898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.644 [2024-06-07 21:44:31.611919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.644 [2024-06-07 21:44:31.611930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.644 [2024-06-07 21:44:31.611947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.644 [2024-06-07 21:44:31.611973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.644 [2024-06-07 21:44:31.611983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.644 [2024-06-07 21:44:31.611994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.644 [2024-06-07 21:44:31.612009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
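The repeated errno = 111 lines here and below are ECONNREFUSED: nvmf_subsystem_remove_listener has just torn down the 4420 listener, the admin queue was aborted (the SQ DELETION / Bad file descriptor messages above), and bdev_nvme keeps retrying the 10.0.0.2:4420 path, which nothing accepts anymore. A quick manual cross-check from the target namespace, assuming iproute2's ss is available on the box:

ip netns exec cvl_0_0_ns_spdk ss -ltn
# expected listeners now: 10.0.0.2:8009 (discovery) and 10.0.0.2:4421 only;
# with no :4420 socket left, every reconnect attempt is refused (errno = 111)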
00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.644 [2024-06-07 21:44:31.621588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.644 [2024-06-07 21:44:31.621864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.644 [2024-06-07 21:44:31.621882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.644 [2024-06-07 21:44:31.621893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.644 [2024-06-07 21:44:31.621913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.644 [2024-06-07 21:44:31.621928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.644 [2024-06-07 21:44:31.621936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.644 [2024-06-07 21:44:31.621946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.644 [2024-06-07 21:44:31.621959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.644 [2024-06-07 21:44:31.631647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.644 [2024-06-07 21:44:31.632002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.644 [2024-06-07 21:44:31.632020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.644 [2024-06-07 21:44:31.632037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.644 [2024-06-07 21:44:31.632053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.644 [2024-06-07 21:44:31.632076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.644 [2024-06-07 21:44:31.632087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.644 [2024-06-07 21:44:31.632096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.644 [2024-06-07 21:44:31.632109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.644 [2024-06-07 21:44:31.641709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.644 [2024-06-07 21:44:31.642036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.644 [2024-06-07 21:44:31.642055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.644 [2024-06-07 21:44:31.642065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.644 [2024-06-07 21:44:31.642081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.644 [2024-06-07 21:44:31.642105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.644 [2024-06-07 21:44:31.642115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.644 [2024-06-07 21:44:31.642125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.644 [2024-06-07 21:44:31.642147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:31.644 [2024-06-07 21:44:31.651770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:31.644 [2024-06-07 21:44:31.652066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.644 [2024-06-07 21:44:31.652085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.644 [2024-06-07 21:44:31.652096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.644 [2024-06-07 21:44:31.652116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.644 [2024-06-07 21:44:31.652140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.644 [2024-06-07 21:44:31.652151] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.644 [2024-06-07 21:44:31.652165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.644 [2024-06-07 21:44:31.652190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
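Interleaved with these reset failures, the harness is polling via its waitforcondition helper; the local max=10 / eval / sleep 1 lines in the trace reconstruct to roughly the following — a close sketch rather than the verbatim autotest_common.sh source:

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0    # stop as soon as the condition holds
        sleep 1                     # otherwise retry, for up to ~10 seconds
    done
    return 1
}
# the call driving this stretch of the trace:
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'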
00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.644 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:31.644 [2024-06-07 21:44:31.661832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.644 [2024-06-07 21:44:31.662183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.644 [2024-06-07 21:44:31.662201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.644 [2024-06-07 21:44:31.662212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.644 [2024-06-07 21:44:31.662227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.644 [2024-06-07 21:44:31.663128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.645 [2024-06-07 21:44:31.663145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.645 [2024-06-07 21:44:31.663155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.645 [2024-06-07 21:44:31.663173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.645 [2024-06-07 21:44:31.671892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.645 [2024-06-07 21:44:31.672237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.645 [2024-06-07 21:44:31.672256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.645 [2024-06-07 21:44:31.672267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.645 [2024-06-07 21:44:31.672283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.645 [2024-06-07 21:44:31.672305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.645 [2024-06-07 21:44:31.672316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.645 [2024-06-07 21:44:31.672325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.645 [2024-06-07 21:44:31.672340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.645 [2024-06-07 21:44:31.681955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.645 [2024-06-07 21:44:31.682237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.645 [2024-06-07 21:44:31.682256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.645 [2024-06-07 21:44:31.682267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.645 [2024-06-07 21:44:31.682284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.645 [2024-06-07 21:44:31.682306] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.645 [2024-06-07 21:44:31.682316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.645 [2024-06-07 21:44:31.682326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.645 [2024-06-07 21:44:31.682341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
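The name lists being compared between the reset errors come from two small RPC helpers, reconstructed here from the pipelines visible in the trace (host/discovery.sh@55 and @59); expanding rpc_cmd to a direct rpc.py invocation is an assumption about the wrapper:

get_subsystem_names() {    # host/discovery.sh@59 in the trace
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {          # host/discovery.sh@55 in the trace
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}
# at this point both should still see the discovered device:
#   get_subsystem_names -> "nvme0"
#   get_bdev_list       -> "nvme0n1 nvme0n2"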
00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.645 [2024-06-07 21:44:31.692020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.645 [2024-06-07 21:44:31.692331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.645 [2024-06-07 21:44:31.692348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.645 [2024-06-07 21:44:31.692359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.645 [2024-06-07 21:44:31.692375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.645 [2024-06-07 21:44:31.692388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.645 [2024-06-07 21:44:31.692398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.645 [2024-06-07 21:44:31.692407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.645 [2024-06-07 21:44:31.692429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.645 [2024-06-07 21:44:31.702086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.645 [2024-06-07 21:44:31.702431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.645 [2024-06-07 21:44:31.702448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.645 [2024-06-07 21:44:31.702459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.645 [2024-06-07 21:44:31.702474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.645 [2024-06-07 21:44:31.702504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.645 [2024-06-07 21:44:31.702515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.645 [2024-06-07 21:44:31.702524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.645 [2024-06-07 21:44:31.702538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
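Annotation: each `[[ 0 == 0 ]]` at autotest_common.sh@588 appears to be the rpc_cmd wrapper asserting the JSON-RPC client's exit status before its output is consumed. A simplified stand-in under that assumption; the real helper keeps a persistent rpc.py session over the -s socket, whereas this sketch shells out per call, and $rootdir is a hypothetical variable for the spdk checkout:

    rpc_cmd() {
        # simplified: forward everything (including -s /tmp/host.sock) to rpc.py
        "$rootdir/scripts/rpc.py" "$@"
        local rc=$?
        [[ $rc == 0 ]]   # the '[[ 0 == 0 ]]' assertion visible at @588;
                         # a nonzero rc fails the calling test step
    }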
00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.645 [2024-06-07 21:44:31.712147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.645 [2024-06-07 21:44:31.712363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.645 [2024-06-07 21:44:31.712381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9f70 with addr=10.0.0.2, port=4420 00:27:31.645 [2024-06-07 21:44:31.712393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9f70 is same with the state(5) to be set 00:27:31.645 [2024-06-07 21:44:31.712411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9f70 (9): Bad file descriptor 00:27:31.645 [2024-06-07 21:44:31.712427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.645 [2024-06-07 21:44:31.712437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.645 [2024-06-07 21:44:31.712448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.645 [2024-06-07 21:44:31.712461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
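Annotation: get_subsystem_paths at @63 resolves which trsvcid(s) controller nvme0 is currently connected through, and the waitforcondition around it spins until only $NVMF_SECOND_PORT (4421) remains. Reconstructed directly from the traced pipeline:

    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' \
            | sort -n \
            | xargs   # flatten to one line, e.g. "4420 4421" mid-failover
    }

The first poll below still returns "4420 4421" (both paths present), so the loop sleeps and re-checks until the stale 4420 path is gone.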
00:27:31.645 [2024-06-07 21:44:31.717707] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:31.645 [2024-06-07 21:44:31.717729] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:27:31.645 21:44:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.579 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:27:32.839 21:44:32 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.839 21:44:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.839 21:44:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.839 21:44:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:32.839 21:44:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:32.839 21:44:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:27:32.839 21:44:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:27:32.839 21:44:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:32.839 21:44:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.839 21:44:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.217 [2024-06-07 21:44:34.099138] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:34.217 [2024-06-07 21:44:34.099158] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:34.217 [2024-06-07 21:44:34.099177] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.217 [2024-06-07 21:44:34.186479] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:34.217 [2024-06-07 21:44:34.374985] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:34.217 [2024-06-07 21:44:34.375019] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:27:34.217 request: 00:27:34.217 { 00:27:34.217 "name": "nvme", 00:27:34.217 "trtype": "tcp", 00:27:34.217 "traddr": "10.0.0.2", 00:27:34.217 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:34.217 "adrfam": "ipv4", 00:27:34.217 "trsvcid": "8009", 00:27:34.217 "wait_for_attach": true, 00:27:34.217 "method": "bdev_nvme_start_discovery", 00:27:34.217 "req_id": 1 00:27:34.217 } 00:27:34.217 Got JSON-RPC error response 00:27:34.217 response: 00:27:34.217 { 00:27:34.217 "code": -17, 00:27:34.217 "message": "File exists" 00:27:34.217 } 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.217 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- 
# local arg=rpc_cmd 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.477 request: 00:27:34.477 { 00:27:34.477 "name": "nvme_second", 00:27:34.477 "trtype": "tcp", 00:27:34.477 "traddr": "10.0.0.2", 00:27:34.477 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:34.477 "adrfam": "ipv4", 00:27:34.477 "trsvcid": "8009", 00:27:34.477 "wait_for_attach": true, 00:27:34.477 "method": "bdev_nvme_start_discovery", 00:27:34.477 "req_id": 1 00:27:34.477 } 00:27:34.477 Got JSON-RPC error response 00:27:34.477 response: 00:27:34.477 { 00:27:34.477 "code": -17, 00:27:34.477 "message": "File exists" 00:27:34.477 } 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.477 21:44:34 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.477 21:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.414 [2024-06-07 21:44:35.634694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.414 [2024-06-07 21:44:35.634730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce6560 with addr=10.0.0.2, port=8010 00:27:35.414 [2024-06-07 21:44:35.634748] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:35.414 [2024-06-07 21:44:35.634757] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:35.414 [2024-06-07 21:44:35.634767] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:36.792 [2024-06-07 21:44:36.637091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.792 [2024-06-07 21:44:36.637121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce6560 with addr=10.0.0.2, port=8010 00:27:36.792 [2024-06-07 21:44:36.637136] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:36.792 [2024-06-07 21:44:36.637145] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:36.792 [2024-06-07 21:44:36.637154] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:37.730 [2024-06-07 21:44:37.639171] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:37.730 request: 00:27:37.730 { 00:27:37.730 "name": "nvme_second", 00:27:37.730 "trtype": "tcp", 00:27:37.730 "traddr": "10.0.0.2", 00:27:37.730 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:37.730 "adrfam": "ipv4", 00:27:37.730 "trsvcid": "8010", 00:27:37.730 "attach_timeout_ms": 3000, 00:27:37.730 "method": "bdev_nvme_start_discovery", 00:27:37.730 "req_id": 1 00:27:37.730 } 00:27:37.730 Got JSON-RPC error response 00:27:37.730 response: 00:27:37.730 { 00:27:37.730 "code": -110, 00:27:37.730 "message": "Connection timed out" 
00:27:37.730 } 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1574855 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:37.730 rmmod nvme_tcp 00:27:37.730 rmmod nvme_fabrics 00:27:37.730 rmmod nvme_keyring 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1574610 ']' 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1574610 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 1574610 ']' 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 1574610 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1574610 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1574610' 00:27:37.730 killing process with pid 1574610 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 1574610 00:27:37.730 21:44:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 1574610 00:27:37.989 21:44:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:37.989 21:44:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:37.989 21:44:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:37.989 21:44:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.989 21:44:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.989 21:44:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.989 21:44:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.989 21:44:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.895 21:44:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.895 00:27:39.895 real 0m20.061s 00:27:39.895 user 0m25.650s 00:27:39.895 sys 0m6.205s 00:27:39.895 21:44:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:39.895 21:44:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.895 ************************************ 00:27:39.895 END TEST nvmf_host_discovery 00:27:39.895 ************************************ 00:27:39.895 21:44:40 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:39.895 21:44:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:39.895 21:44:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:39.895 21:44:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:39.895 ************************************ 00:27:39.895 START TEST nvmf_host_multipath_status 00:27:39.895 ************************************ 00:27:39.895 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:40.155 * Looking for test storage... 
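Annotation: the suite transition above is driven by run_test, which brackets each test script with START/END banners and times it; the real/user/sys lines after "END TEST nvmf_host_discovery" are its time output, and the '[' 3 -le 1 ']' check at @1100 is its argument-count guard. A sketch consistent with the banners in this log (banner widths and timing hooks are approximations, not the literal helper):

    run_test() {
        local test_name=$1
        shift
        (( $# <= 1 )) && return 1   # the '[' 3 -le 1 ']' guard in the trace
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                   # produces the real/user/sys summary
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }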
00:27:40.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:40.155 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:40.156 21:44:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:27:40.156 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:46.723 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:46.723 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
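Annotation: the block above is gather_supported_nvmf_pci_devs classifying NICs by PCI vendor:device ID — 0x8086:0x1592/0x159b are Intel E810, 0x8086:0x37d2 is X722, and the 0x15b3 entries are Mellanox. With SPDK_TEST_NVMF_NICS=e810, only the two 0x159b ports (0000:af:00.0 and 0000:af:00.1) survive the filter and their net devices cvl_0_0/cvl_0_1 are collected. A condensed sketch of that logic, assuming the pci_bus_cache associative array that nvmf/common.sh populates from lspci beforehand:

    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs=("${e810[@]}")    # the [[ e810 == e810 ]] branch taken above
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # -> cvl_0_0, cvl_0_1
        net_devs+=("${pci_net_devs[@]##*/}")              # strip the path prefix
    done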
00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:46.723 Found net devices under 0000:af:00.0: cvl_0_0 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:46.723 Found net devices under 0000:af:00.1: cvl_0_1 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.723 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.724 21:44:46 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:27:46.724 00:27:46.724 --- 10.0.0.2 ping statistics --- 00:27:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.724 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:46.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:27:46.724 00:27:46.724 --- 10.0.0.1 ping statistics --- 00:27:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.724 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1580879 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1580879 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1580879 ']' 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:46.724 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:46.982 [2024-06-07 21:44:46.998960] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
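Annotation: nvmf_tcp_init above wires the two E810 ports back-to-back: cvl_0_0 is pushed into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings verify both directions before nvmf_tgt is launched inside the namespace. The essential commands, pulled from the trace ($rootdir below stands in for the jenkins workspace path):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target then runs inside the namespace (nvmfpid=1580879 below):
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &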
00:27:46.982 [2024-06-07 21:44:46.999016] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.982 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.982 [2024-06-07 21:44:47.092532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:46.982 [2024-06-07 21:44:47.184052] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.982 [2024-06-07 21:44:47.184094] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.982 [2024-06-07 21:44:47.184108] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.982 [2024-06-07 21:44:47.184117] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.982 [2024-06-07 21:44:47.184124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.982 [2024-06-07 21:44:47.184182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.982 [2024-06-07 21:44:47.184185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.916 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:47.917 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:27:47.917 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.917 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:47.917 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:47.917 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.917 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1580879 00:27:47.917 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:48.175 [2024-06-07 21:44:48.195601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.175 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:48.433 Malloc0 00:27:48.433 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:48.691 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:48.949 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.949 [2024-06-07 21:44:49.198762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.207 21:44:49 
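# Note: the RPCs traced above provision the whole target; collected in one
# place for readability (same arguments as traced, $SPDK as before):
RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, opts per common.sh
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -r -m 2             # -r: ANA reporting enabled
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# two listeners on one subsystem = the two ANA paths the test exercises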
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:49.207 [2024-06-07 21:44:49.455552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:49.207 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1581366 00:27:49.207 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:49.207 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:49.207 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1581366 /var/tmp/bdevperf.sock 00:27:49.466 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1581366 ']' 00:27:49.466 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:49.466 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:49.466 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:49.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:49.466 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:49.466 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:49.725 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:49.725 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:27:49.725 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:49.984 21:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:50.242 Nvme0n1 00:27:50.243 21:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:50.810 Nvme0n1 00:27:50.810 21:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:50.810 21:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:52.714 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:52.714 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
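# Note: on the initiator side, @44-@56 above start bdevperf suspended (-z,
# RPC-driven) on core 0x4 and attach the same subsystem twice under one
# controller name, which is what yields a single multipath bdev. Condensed
# sketch (flag readings are from rpc.py usage and worth double-checking):
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$BPERF_RPC bdev_nvme_set_options -r -1            # -1: retry without limit
$BPERF_RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$BPERF_RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# both calls report the same bdev name, Nvme0n1: one namespace, two paths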
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:52.971 21:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:53.228 21:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:54.603 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:54.603 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:54.603 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.603 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:54.603 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.603 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:54.603 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:54.603 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.861 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:54.861 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:54.861 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.861 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:55.119 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.119 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:55.119 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.119 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:55.377 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.377 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:55.377 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.378 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
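# Note: set_ANA_state (@59/@60 above) is the knob the whole test turns: it
# rewrites the advertised ANA state of each listener, and the initiator is
# expected to re-rank its two paths within the 1 s sleep that follows each
# call. Sketch of the helper as traced:
set_ANA_state() {   # set_ANA_state <state-for-4420> <state-for-4421>
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}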
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:55.636 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.636 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:55.636 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.636 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:55.894 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.894 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:55.894 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:56.155 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:56.427 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:57.407 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:57.407 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:57.407 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.407 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:57.665 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:57.665 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:57.665 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.665 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:57.923 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.923 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:57.923 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.923 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:58.182 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- 
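# Note: every assertion above reduces to one jq probe of
# bdev_nvme_get_io_paths on the bdevperf RPC socket. A sketch of the helper
# pair behind @64-@73 ($BPERF_RPC as in the sketch above); check_status takes
# current, connected, accessible for port 4420 then 4421, in that order:
port_status() {    # port_status <port> <field> <expected>
    local got
    got=$($BPERF_RPC bdev_nvme_get_io_paths | jq -r \
        ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$got" == "$3" ]]
}
check_status() {   # check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
    port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
    port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}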
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.182 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:58.182 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.182 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:58.440 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.441 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:58.441 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.441 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:58.700 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.700 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:58.700 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.700 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:58.959 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.959 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:58.959 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:59.218 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:59.477 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:00.413 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:00.413 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:00.413 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.413 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:00.672 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.672 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:00.672 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.672 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:00.931 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:00.931 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:00.931 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.931 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:01.189 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.189 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:01.189 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:01.189 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.447 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.447 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:01.447 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.447 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:01.706 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.706 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:01.706 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.706 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:01.966 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.966 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:01.966 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:01.966 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:02.225 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:03.601 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:03.601 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:03.601 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.601 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:03.601 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.601 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:03.601 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.601 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:03.859 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:03.859 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:03.859 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.859 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:04.119 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.119 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:04.119 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.119 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:04.378 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.378 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:04.378 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.378 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:04.637 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:28:04.637 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:04.637 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.637 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:04.896 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:04.896 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:04.896 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:05.155 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:05.414 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:06.350 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:06.350 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:06.350 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.350 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:06.609 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:06.609 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:06.609 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.609 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:06.867 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:06.867 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:06.867 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:06.867 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.126 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.126 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
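# Note: the cycles so far pin down what each flag tracks. "connected" stays
# true throughout (the TCP connections survive ANA flips), "accessible" goes
# false only for an inaccessible listener, and under the default
# active_passive policy exactly one accessible path is "current", with an
# optimized path preferred over a non_optimized one. Two of the traced
# transitions, restated with the helpers sketched above:
set_ANA_state non_optimized optimized;    sleep 1   # as in @94
check_status false true true true true true         # optimized 4421 is current
set_ANA_state non_optimized inaccessible; sleep 1   # as in @104
check_status true false true true true false        # 4421 unusable, 4420 takes over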
port_status 4421 connected true 00:28:07.126 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:07.126 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.385 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.385 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:07.385 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.385 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:07.385 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:07.385 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:07.644 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.644 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:07.644 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:07.903 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:07.903 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:07.903 21:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:08.162 21:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:09.540 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:09.540 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:09.540 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.540 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:09.540 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:09.540 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:09.540 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.540 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:09.799 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.799 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:09.799 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.799 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:10.058 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.058 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:10.058 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.058 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:10.317 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.317 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:10.317 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.317 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:10.575 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:10.575 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:10.575 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.575 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:10.834 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.834 21:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:11.093 21:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:11.093 21:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
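# Note: @116 above switches Nvme0n1 from the default active_passive policy to
# active_active; from here on every accessible optimized path may carry I/O,
# so with both listeners optimized the test expects current=true on both
# ports (@121 below), where the earlier active_passive pass (@92) accepted
# only one current path. Restated with the sketched helpers:
$BPERF_RPC bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
set_ANA_state optimized optimized; sleep 1
check_status true true true true true true   # both paths current at once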
optimized 00:28:11.352 21:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:11.611 21:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:12.547 21:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:12.547 21:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:12.547 21:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.547 21:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:12.806 21:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.806 21:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:12.806 21:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.806 21:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:13.064 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.064 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:13.064 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.064 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:13.322 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.322 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:13.322 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.323 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:13.581 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.581 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:13.581 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.581 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:13.840 21:45:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.840 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:13.840 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.840 21:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:14.098 21:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.098 21:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:14.098 21:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:14.357 21:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:14.616 21:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:15.552 21:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:15.552 21:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:15.552 21:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.552 21:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:15.814 21:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:15.814 21:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:15.814 21:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.814 21:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:16.072 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.072 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:16.072 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.072 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:16.331 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.331 21:45:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:16.331 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:16.331 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.590 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.590 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:16.590 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.590 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:16.849 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.849 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:16.849 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.849 21:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:17.107 21:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.107 21:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:17.107 21:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:17.365 21:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:17.624 21:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:18.559 21:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:18.559 21:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:18.559 21:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.559 21:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:18.818 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.818 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:18.818 21:45:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.818 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:19.077 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.077 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:19.077 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.077 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:19.335 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.335 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:19.335 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.335 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:19.593 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.594 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:19.594 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.594 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:19.852 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.852 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:19.852 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.852 21:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:20.158 21:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:20.158 21:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:20.158 21:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:20.442 21:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:20.700 21:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:21.636 21:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:21.636 21:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:21.636 21:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.636 21:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:21.895 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.895 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:21.895 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.895 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:22.154 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:22.154 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:22.154 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.154 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:22.413 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.413 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:22.413 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.413 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:22.672 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.672 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:22.672 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.672 21:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:22.932 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.932 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:22.933 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.933 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1581366 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1581366 ']' 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1581366 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1581366 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1581366' 00:28:23.191 killing process with pid 1581366 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1581366 00:28:23.191 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1581366 00:28:23.191 Connection closed with partial response: 00:28:23.191 00:28:23.191 00:28:23.453 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1581366 00:28:23.453 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:23.453 [2024-06-07 21:44:49.523369] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:28:23.453 [2024-06-07 21:44:49.523434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581366 ] 00:28:23.453 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.453 [2024-06-07 21:44:49.588311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.453 [2024-06-07 21:44:49.660206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.453 Running I/O for 90 seconds... 
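# Note: the try.txt dump that follows is the payload of the test. Each
# "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completion is an I/O that failed
# with NVMe path-related status (SCT 0x3, SC 0x2: ANA inaccessible) while its
# listener was flipped inaccessible; the multipath layer retries such I/O on
# the surviving path, which is how the 90 s bdevperf run keeps making
# progress. A quick tally over the dump, assuming try.txt as above:
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt   # I/Os completed with ANA-inaccessible status
grep -o 'sqid:[0-9]*' try.txt | sort | uniq -c     # spread per submission queue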
00:28:23.453 [2024-06-07 21:45:05] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE*: first failover burst on qid:1 -- WRITE commands (lba 7080-7968, len:8) and READ commands (lba 6952-7072, len:8) all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) [repetitive per-command NOTICE pairs condensed]
00:28:23.456 [2024-06-07 21:45:20] nvme_qpair.c: second failover burst on qid:1 -- WRITE commands (lba 85936-86192, len:8) and READ commands (lba 85280-85880, len:8) again completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) [repetitive per-command NOTICE pairs condensed]
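
The "(03/02)" in each completion above is the NVMe status code type and status code: type 0x3 is path-related status, and code 0x2 within it is Asymmetric Access Inaccessible, meaning the target is advertising the ANA state this test just toggled rather than reporting a transport fault. A small hedged helper for decoding that pair (the mapping follows the NVMe base specification's path-related status codes; the function itself is illustrative, not part of the test scripts):

    # Hypothetical decoder for the "(sct/sc)" pair that
    # spdk_nvme_print_completion prints, limited to SCT 0x3
    # (path-related status) values.
    decode_path_status() {
        case "$1/$2" in
            03/00) echo "INTERNAL PATH ERROR" ;;
            03/01) echo "ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
            03/02) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;
            03/03) echo "ASYMMETRIC ACCESS TRANSITION" ;;
            *)     echo "not a path-related status: $1/$2" ;;
        esac
    }

    decode_path_status 03 02   # prints: ASYMMETRIC ACCESS INACCESSIBLE
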
00:28:23.457 Received shutdown signal, test time was about 32.257509 seconds
00:28:23.457
00:28:23.457                                                           Latency(us)
00:28:23.457 Device Information                                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:23.457 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:23.457 Verification LBA range: start 0x0 length 0x4000
00:28:23.457 Nvme0n1                                                :      32.26    7594.65      29.67      0.00       0.00   16835.33     240.17 4026531.84
00:28:23.457 ===================================================================================================================
00:28:23.457 Total                                                  :    7594.65      29.67       0.00       0.00   16835.33     240.17 4026531.84
00:28:23.457 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:23.716 rmmod nvme_tcp
00:28:23.716 rmmod nvme_fabrics
00:28:23.716 rmmod nvme_keyring
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1580879 ']'
00:28:23.716 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1580879
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1580879 ']'
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1580879
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1580879
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1580879'
00:28:23.717 killing process with pid 1580879
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1580879
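
Two things worth reading off the summary above: the throughput columns are self-consistent (7594.65 IOPS at a 4096-byte IO size is 7594.65 x 4096 / 2^20, roughly 29.67 MiB/s, matching the MiB/s column), and the kill sequence that follows is autotest_common.sh's killprocess flow: validate the pid, refuse sudo-owned processes, echo, signal, and reap. A hedged sketch of that flow as traced (internals reconstructed from the xtrace lines above, so details may differ from the real helper):

    # Sketch of the killprocess flow visible in the trace: guard against an
    # empty pid, a vanished process, and sudo-owned processes before killing.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1          # the '[' -z ... ']' guard above
        kill -0 "$pid" || return 1         # process must still be alive
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # never kill a sudo process
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap; only valid for child processes
    }
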
00:28:23.717 21:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1580879
00:28:23.976 21:45:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:23.976 21:45:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:23.976 21:45:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:23.976 21:45:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:23.976 21:45:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:23.976 21:45:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:23.976 21:45:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:23.976 21:45:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:26.514 21:45:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:26.514
00:28:26.514 real 0m46.049s
00:28:26.514 user 2m7.328s
00:28:26.514 sys 0m12.548s
00:28:26.514 21:45:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable
00:28:26.514 21:45:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:28:26.514 ************************************
00:28:26.514 END TEST nvmf_host_multipath_status
00:28:26.514 ************************************
00:28:26.514 21:45:26 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:26.514 21:45:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:28:26.514 21:45:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:28:26.514 21:45:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:26.514 ************************************
00:28:26.514 START TEST nvmf_discovery_remove_ifc
00:28:26.514 ************************************
00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:26.514 * Looking for test storage...
00:28:26.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:28:26.514 21:45:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:33.086 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:33.086 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.086 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.087 21:45:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:33.087 Found net devices under 0000:af:00.0: cvl_0_0 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:33.087 Found net devices under 0000:af:00.1: cvl_0_1 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:33.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:28:33.087 00:28:33.087 --- 10.0.0.2 ping statistics --- 00:28:33.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.087 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:28:33.087 00:28:33.087 --- 10.0.0.1 ping statistics --- 00:28:33.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.087 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1591909 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1591909 00:28:33.087 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1591909 ']' 00:28:33.088 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.088 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:33.088 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.088 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:33.088 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.088 21:45:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:33.088 [2024-06-07 21:45:32.794826] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
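The namespace plumbing traced above reduces to a short sequence of iproute2 commands. A minimal sketch, using the interface and namespace names from this run (a different machine would substitute its own e810 netdev names):

# Target NIC moves into a private namespace; initiator NIC stays in the root ns.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
sudo ip netns add "$NS"
sudo ip link set "$TGT_IF" netns "$NS"
sudo ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address
sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address
sudo ip link set "$INI_IF" up
sudo ip netns exec "$NS" ip link set "$TGT_IF" up
sudo ip netns exec "$NS" ip link set lo up
sudo iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                           # initiator -> target sanity check
sudo ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator sanity check

With both pings answering, nvmf_tgt is launched inside the namespace (the ip netns exec ... nvmf_tgt line below), so the cvl_0_0 path is the only link between host and target and the test can sever it at will.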
00:28:33.088 [2024-06-07 21:45:32.794883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.088 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.088 [2024-06-07 21:45:32.881718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.088 [2024-06-07 21:45:32.970541] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.088 [2024-06-07 21:45:32.970585] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.088 [2024-06-07 21:45:32.970595] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.088 [2024-06-07 21:45:32.970604] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.088 [2024-06-07 21:45:32.970611] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.088 [2024-06-07 21:45:32.970640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.655 [2024-06-07 21:45:33.775321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.655 [2024-06-07 21:45:33.783456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:33.655 null0 00:28:33.655 [2024-06-07 21:45:33.815471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1592186 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1592186 /tmp/host.sock 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1592186 ']' 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:28:33.655 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.655 21:45:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:33.655 [2024-06-07 21:45:33.885104] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:28:33.655 [2024-06-07 21:45:33.885156] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592186 ] 00:28:33.655 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.914 [2024-06-07 21:45:33.973291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.914 [2024-06-07 21:45:34.059776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.914 21:45:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 [2024-06-07 21:45:35.234261] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:35.292 [2024-06-07 21:45:35.234285] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:35.292 [2024-06-07 21:45:35.234303] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:35.292 [2024-06-07 21:45:35.320591] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:35.292 [2024-06-07 21:45:35.503657] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:35.292 [2024-06-07 21:45:35.503715] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:35.292 [2024-06-07 21:45:35.503744] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:35.292 [2024-06-07 21:45:35.503761] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:35.292 [2024-06-07 21:45:35.503784] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.292 [2024-06-07 21:45:35.511460] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22ac9b0 was disconnected and freed. delete nvme_qpair. 
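The host side drives everything over /tmp/host.sock, and the get_bdev_list/wait_for_bdev helpers visible in the trace are thin wrappers around bdev_get_bdevs. A rough reconstruction under the paths of this run (the authoritative versions live in test/nvmf/host/discovery_remove_ifc.sh and the common scripts):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_bdev_list() {
    # Sorted, space-separated bdev names on one line, e.g. "nvme0n1".
    "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected string.
    while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
}

# Discovery was started with deliberately aggressive timeouts (flags verbatim
# from the trace) so that path loss is noticed within a couple of seconds:
"$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

wait_for_bdev nvme0n1 then confirms the attach logged above before the interface is touched.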
00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:35.292 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:35.551 21:45:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.488 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.488 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.488 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.488 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:36.488 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.488 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.488 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.747 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:36.747 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:36.747 21:45:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:37.684 21:45:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:38.620 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.620 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.620 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.620 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.620 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:38.620 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.620 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:38.620 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:38.879 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:38.879 21:45:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:39.817 21:45:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.753 [2024-06-07 21:45:40.944576] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:40.753 [2024-06-07 21:45:40.944627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.753 [2024-06-07 21:45:40.944642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.753 [2024-06-07 21:45:40.944656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.753 [2024-06-07 21:45:40.944666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:40.753 [2024-06-07 21:45:40.944677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.753 [2024-06-07 21:45:40.944687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.753 [2024-06-07 21:45:40.944698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.753 [2024-06-07 21:45:40.944707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.753 [2024-06-07 21:45:40.944718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.753 [2024-06-07 21:45:40.944727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.753 [2024-06-07 21:45:40.944736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2273ad0 is same with the state(5) to be set 00:28:40.753 [2024-06-07 21:45:40.954596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2273ad0 (9): Bad file descriptor 00:28:40.753 [2024-06-07 21:45:40.964641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.753 21:45:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.753 21:45:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.753 21:45:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.753 21:45:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.753 21:45:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.753 21:45:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.753 21:45:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.128 [2024-06-07 21:45:41.974123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:42.128 [2024-06-07 21:45:41.974205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2273ad0 with addr=10.0.0.2, port=4420 00:28:42.128 [2024-06-07 21:45:41.974236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2273ad0 is same with the state(5) to be set 00:28:42.128 [2024-06-07 21:45:41.974292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2273ad0 (9): Bad file descriptor 00:28:42.128 [2024-06-07 21:45:41.975171] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:42.128 [2024-06-07 21:45:41.975221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:42.128 [2024-06-07 21:45:41.975242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:42.128 [2024-06-07 21:45:41.975264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
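The fault itself is nothing more than deleting the target address and downing the link inside the namespace (lines 75 and 76 of discovery_remove_ifc.sh in the trace):

sudo ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''   # bdev list should drain once the controller is given up

With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, the host retries the connection roughly twice; the connect() errno 110 (ETIMEDOUT) above is one such attempt, after which the controller is failed, its bdev deleted, and the wait loop falls through.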
00:28:42.128 [2024-06-07 21:45:41.975314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.128 [2024-06-07 21:45:41.975338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:42.128 21:45:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.128 21:45:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:42.128 21:45:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:43.062 [2024-06-07 21:45:42.977845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.062 [2024-06-07 21:45:42.977888] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:43.062 [2024-06-07 21:45:42.977917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.062 [2024-06-07 21:45:42.977930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.062 [2024-06-07 21:45:42.977943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.062 [2024-06-07 21:45:42.977953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.062 [2024-06-07 21:45:42.977963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.062 [2024-06-07 21:45:42.977972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.062 [2024-06-07 21:45:42.977982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.062 [2024-06-07 21:45:42.977992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.062 [2024-06-07 21:45:42.978003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.062 [2024-06-07 21:45:42.978012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.062 [2024-06-07 21:45:42.978021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:43.062 [2024-06-07 21:45:42.978221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2272ea0 (9): Bad file descriptor 00:28:43.062 [2024-06-07 21:45:42.979233] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:43.062 [2024-06-07 21:45:42.979248] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:43.062 21:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:43.997 21:45:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:44.931 [2024-06-07 21:45:45.030240] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:44.931 [2024-06-07 21:45:45.030261] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:44.931 [2024-06-07 21:45:45.030283] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:44.931 [2024-06-07 21:45:45.118580] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:45.190 [2024-06-07 21:45:45.219642] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:45.190 [2024-06-07 21:45:45.219688] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:45.190 [2024-06-07 21:45:45.219712] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:45.190 [2024-06-07 21:45:45.219730] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:45.190 [2024-06-07 21:45:45.219739] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:45.190 [2024-06-07 21:45:45.227592] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22807d0 was disconnected and freed. delete nvme_qpair. 
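Recovery is the mirror image: once the address and link return, the still-running discovery service reconnects and reattaches the subsystem, plausibly under the next free controller index since the failed nvme0 controller was deleted, which is why the check below is against nvme1n1 rather than nvme0n1:

sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1   # reattach comes back as controller nvme1 -> bdev nvme1n1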
00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1592186 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1592186 ']' 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1592186 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1592186 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1592186' 00:28:45.190 killing process with pid 1592186 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1592186 00:28:45.190 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1592186 00:28:45.448 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:45.448 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.448 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:28:45.448 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:45.448 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:28:45.448 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.448 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:45.448 rmmod nvme_tcp 00:28:45.448 rmmod nvme_fabrics 00:28:45.449 rmmod nvme_keyring 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
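Teardown, traced on both sides of this point, amounts to unloading the host kernel modules, killing the target, and flushing the remaining addresses and namespace. A condensed sketch (killprocess and _remove_spdk_ns in the common scripts are the authoritative versions; ip netns delete is assumed to be what _remove_spdk_ns performs):

sudo modprobe -v -r nvme-tcp      # the rmmod lines above show it also drops
                                  # nvme_fabrics and nvme_keyring as unused deps
sudo modprobe -v -r nvme-fabrics
# pid 1591909 in this run; wait succeeds because nvmf_tgt is a child of the
# test shell (killprocess does this kill/wait pair after its sudo check)
kill "$nvmfpid" && wait "$nvmfpid"
sudo ip -4 addr flush cvl_0_1
sudo ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns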
00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1591909 ']' 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1591909 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1591909 ']' 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1591909 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1591909 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1591909' 00:28:45.449 killing process with pid 1591909 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1591909 00:28:45.449 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1591909 00:28:45.707 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:45.707 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:45.707 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:45.707 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.707 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:45.707 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.707 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.707 21:45:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.241 21:45:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:48.241 00:28:48.241 real 0m21.646s 00:28:48.241 user 0m25.997s 00:28:48.241 sys 0m6.016s 00:28:48.241 21:45:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:48.241 21:45:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:48.241 ************************************ 00:28:48.241 END TEST nvmf_discovery_remove_ifc 00:28:48.241 ************************************ 00:28:48.241 21:45:47 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:48.241 21:45:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:48.241 21:45:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:48.241 21:45:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:48.241 ************************************ 00:28:48.241 START TEST nvmf_identify_kernel_target 00:28:48.241 ************************************ 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:48.241 * Looking for test storage... 00:28:48.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.241 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
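The host identity used for every discover/connect in this run is generated once by nvme-cli, as traced at nvmf/common.sh@17-19. A sketch of that wiring (values differ per machine, and the hostid derivation shown here is an assumption; the trace only shows the resulting strings):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep just the trailing uuid (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later expanded into commands such as:
    #   nvme discover "${NVME_HOST[@]}" -a 10.0.0.1 -t tcp -s 4420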
00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
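The trap registered at nvmf/common.sh@446 is what makes the teardown at the end of each test unconditional: nvmftestfini runs on normal exit and on SIGINT/SIGTERM alike. A sketch of the pattern (function bodies elided; the real definitions are in nvmf/common.sh):

    nvmftestinit() {
        trap nvmftestfini SIGINT SIGTERM EXIT   # fires even if the test aborts early
        prepare_net_devs                        # classify NICs, set up the netns topology
    }

    nvmftestfini() {
        nvmfcleanup        # modprobe -v -r nvme-tcp / nvme-fabrics, as traced above
        remove_spdk_ns     # delete cvl_0_0_ns_spdk, ip -4 addr flush the test ifaces
        trap - SIGINT SIGTERM EXIT
    }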
00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:48.242 21:45:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.816 21:45:54 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:54.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:54.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.816 
21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:54.816 Found net devices under 0000:af:00.0: cvl_0_0 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:54.816 Found net devices under 0000:af:00.1: cvl_0_1 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:54.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:28:54.816 00:28:54.816 --- 10.0.0.2 ping statistics --- 00:28:54.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.816 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:28:54.816 00:28:54.816 --- 10.0.0.1 ping statistics --- 00:28:54.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.816 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:28:54.816 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.817 
21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:54.817 21:45:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:57.421 Waiting for block devices as requested 00:28:57.421 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:28:57.681 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:57.681 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:57.681 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:57.681 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:57.940 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:57.940 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:57.940 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:58.199 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:58.199 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:58.199 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:58.457 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:58.457 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:58.457 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:58.457 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:58.716 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:58.716 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:58.716 21:45:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:58.716 21:45:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:58.716 21:45:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:58.716 21:45:58 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:28:58.716 21:45:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:58.716 21:45:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:28:58.716 21:45:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:58.716 21:45:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:58.716 21:45:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:58.976 No valid GPT data, bailing 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:58.976 00:28:58.976 Discovery Log Number of Records 2, Generation counter 2 00:28:58.976 =====Discovery Log Entry 0====== 00:28:58.976 trtype: tcp 00:28:58.976 adrfam: ipv4 00:28:58.976 subtype: current discovery subsystem 00:28:58.976 treq: not specified, sq flow control disable supported 00:28:58.976 portid: 1 00:28:58.976 trsvcid: 4420 00:28:58.976 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:58.976 traddr: 10.0.0.1 00:28:58.976 eflags: none 00:28:58.976 sectype: none 00:28:58.976 =====Discovery Log Entry 1====== 
00:28:58.976 trtype: tcp 00:28:58.976 adrfam: ipv4 00:28:58.976 subtype: nvme subsystem 00:28:58.976 treq: not specified, sq flow control disable supported 00:28:58.976 portid: 1 00:28:58.976 trsvcid: 4420 00:28:58.976 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:58.976 traddr: 10.0.0.1 00:28:58.976 eflags: none 00:28:58.976 sectype: none 00:28:58.976 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:58.976 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:58.976 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.976 ===================================================== 00:28:58.976 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:58.976 ===================================================== 00:28:58.976 Controller Capabilities/Features 00:28:58.976 ================================ 00:28:58.976 Vendor ID: 0000 00:28:58.976 Subsystem Vendor ID: 0000 00:28:58.976 Serial Number: 004e9026ded6c5d7cb98 00:28:58.976 Model Number: Linux 00:28:58.976 Firmware Version: 6.7.0-68 00:28:58.976 Recommended Arb Burst: 0 00:28:58.976 IEEE OUI Identifier: 00 00 00 00:28:58.976 Multi-path I/O 00:28:58.976 May have multiple subsystem ports: No 00:28:58.976 May have multiple controllers: No 00:28:58.976 Associated with SR-IOV VF: No 00:28:58.976 Max Data Transfer Size: Unlimited 00:28:58.976 Max Number of Namespaces: 0 00:28:58.976 Max Number of I/O Queues: 1024 00:28:58.976 NVMe Specification Version (VS): 1.3 00:28:58.976 NVMe Specification Version (Identify): 1.3 00:28:58.976 Maximum Queue Entries: 1024 00:28:58.976 Contiguous Queues Required: No 00:28:58.976 Arbitration Mechanisms Supported 00:28:58.976 Weighted Round Robin: Not Supported 00:28:58.976 Vendor Specific: Not Supported 00:28:58.976 Reset Timeout: 7500 ms 00:28:58.976 Doorbell Stride: 4 bytes 00:28:58.976 NVM Subsystem Reset: Not Supported 00:28:58.976 Command Sets Supported 00:28:58.976 NVM Command Set: Supported 00:28:58.976 Boot Partition: Not Supported 00:28:58.976 Memory Page Size Minimum: 4096 bytes 00:28:58.976 Memory Page Size Maximum: 4096 bytes 00:28:58.976 Persistent Memory Region: Not Supported 00:28:58.976 Optional Asynchronous Events Supported 00:28:58.976 Namespace Attribute Notices: Not Supported 00:28:58.976 Firmware Activation Notices: Not Supported 00:28:58.976 ANA Change Notices: Not Supported 00:28:58.976 PLE Aggregate Log Change Notices: Not Supported 00:28:58.976 LBA Status Info Alert Notices: Not Supported 00:28:58.976 EGE Aggregate Log Change Notices: Not Supported 00:28:58.976 Normal NVM Subsystem Shutdown event: Not Supported 00:28:58.976 Zone Descriptor Change Notices: Not Supported 00:28:58.976 Discovery Log Change Notices: Supported 00:28:58.976 Controller Attributes 00:28:58.976 128-bit Host Identifier: Not Supported 00:28:58.976 Non-Operational Permissive Mode: Not Supported 00:28:58.976 NVM Sets: Not Supported 00:28:58.976 Read Recovery Levels: Not Supported 00:28:58.976 Endurance Groups: Not Supported 00:28:58.976 Predictable Latency Mode: Not Supported 00:28:58.976 Traffic Based Keep ALive: Not Supported 00:28:58.976 Namespace Granularity: Not Supported 00:28:58.976 SQ Associations: Not Supported 00:28:58.976 UUID List: Not Supported 00:28:58.976 Multi-Domain Subsystem: Not Supported 00:28:58.976 Fixed Capacity Management: Not Supported 00:28:58.976 Variable Capacity Management: Not 
Supported 00:28:58.976 Delete Endurance Group: Not Supported 00:28:58.976 Delete NVM Set: Not Supported 00:28:58.976 Extended LBA Formats Supported: Not Supported 00:28:58.976 Flexible Data Placement Supported: Not Supported 00:28:58.976 00:28:58.976 Controller Memory Buffer Support 00:28:58.976 ================================ 00:28:58.976 Supported: No 00:28:58.976 00:28:58.976 Persistent Memory Region Support 00:28:58.976 ================================ 00:28:58.976 Supported: No 00:28:58.976 00:28:58.976 Admin Command Set Attributes 00:28:58.976 ============================ 00:28:58.976 Security Send/Receive: Not Supported 00:28:58.976 Format NVM: Not Supported 00:28:58.976 Firmware Activate/Download: Not Supported 00:28:58.976 Namespace Management: Not Supported 00:28:58.976 Device Self-Test: Not Supported 00:28:58.977 Directives: Not Supported 00:28:58.977 NVMe-MI: Not Supported 00:28:58.977 Virtualization Management: Not Supported 00:28:58.977 Doorbell Buffer Config: Not Supported 00:28:58.977 Get LBA Status Capability: Not Supported 00:28:58.977 Command & Feature Lockdown Capability: Not Supported 00:28:58.977 Abort Command Limit: 1 00:28:58.977 Async Event Request Limit: 1 00:28:58.977 Number of Firmware Slots: N/A 00:28:58.977 Firmware Slot 1 Read-Only: N/A 00:28:58.977 Firmware Activation Without Reset: N/A 00:28:58.977 Multiple Update Detection Support: N/A 00:28:58.977 Firmware Update Granularity: No Information Provided 00:28:58.977 Per-Namespace SMART Log: No 00:28:58.977 Asymmetric Namespace Access Log Page: Not Supported 00:28:58.977 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:58.977 Command Effects Log Page: Not Supported 00:28:58.977 Get Log Page Extended Data: Supported 00:28:58.977 Telemetry Log Pages: Not Supported 00:28:58.977 Persistent Event Log Pages: Not Supported 00:28:58.977 Supported Log Pages Log Page: May Support 00:28:58.977 Commands Supported & Effects Log Page: Not Supported 00:28:58.977 Feature Identifiers & Effects Log Page:May Support 00:28:58.977 NVMe-MI Commands & Effects Log Page: May Support 00:28:58.977 Data Area 4 for Telemetry Log: Not Supported 00:28:58.977 Error Log Page Entries Supported: 1 00:28:58.977 Keep Alive: Not Supported 00:28:58.977 00:28:58.977 NVM Command Set Attributes 00:28:58.977 ========================== 00:28:58.977 Submission Queue Entry Size 00:28:58.977 Max: 1 00:28:58.977 Min: 1 00:28:58.977 Completion Queue Entry Size 00:28:58.977 Max: 1 00:28:58.977 Min: 1 00:28:58.977 Number of Namespaces: 0 00:28:58.977 Compare Command: Not Supported 00:28:58.977 Write Uncorrectable Command: Not Supported 00:28:58.977 Dataset Management Command: Not Supported 00:28:58.977 Write Zeroes Command: Not Supported 00:28:58.977 Set Features Save Field: Not Supported 00:28:58.977 Reservations: Not Supported 00:28:58.977 Timestamp: Not Supported 00:28:58.977 Copy: Not Supported 00:28:58.977 Volatile Write Cache: Not Present 00:28:58.977 Atomic Write Unit (Normal): 1 00:28:58.977 Atomic Write Unit (PFail): 1 00:28:58.977 Atomic Compare & Write Unit: 1 00:28:58.977 Fused Compare & Write: Not Supported 00:28:58.977 Scatter-Gather List 00:28:58.977 SGL Command Set: Supported 00:28:58.977 SGL Keyed: Not Supported 00:28:58.977 SGL Bit Bucket Descriptor: Not Supported 00:28:58.977 SGL Metadata Pointer: Not Supported 00:28:58.977 Oversized SGL: Not Supported 00:28:58.977 SGL Metadata Address: Not Supported 00:28:58.977 SGL Offset: Supported 00:28:58.977 Transport SGL Data Block: Not Supported 00:28:58.977 Replay Protected Memory Block: 
Not Supported 00:28:58.977 00:28:58.977 Firmware Slot Information 00:28:58.977 ========================= 00:28:58.977 Active slot: 0 00:28:58.977 00:28:58.977 00:28:58.977 Error Log 00:28:58.977 ========= 00:28:58.977 00:28:58.977 Active Namespaces 00:28:58.977 ================= 00:28:58.977 Discovery Log Page 00:28:58.977 ================== 00:28:58.977 Generation Counter: 2 00:28:58.977 Number of Records: 2 00:28:58.977 Record Format: 0 00:28:58.977 00:28:58.977 Discovery Log Entry 0 00:28:58.977 ---------------------- 00:28:58.977 Transport Type: 3 (TCP) 00:28:58.977 Address Family: 1 (IPv4) 00:28:58.977 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:58.977 Entry Flags: 00:28:58.977 Duplicate Returned Information: 0 00:28:58.977 Explicit Persistent Connection Support for Discovery: 0 00:28:58.977 Transport Requirements: 00:28:58.977 Secure Channel: Not Specified 00:28:58.977 Port ID: 1 (0x0001) 00:28:58.977 Controller ID: 65535 (0xffff) 00:28:58.977 Admin Max SQ Size: 32 00:28:58.977 Transport Service Identifier: 4420 00:28:58.977 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:58.977 Transport Address: 10.0.0.1 00:28:58.977 Discovery Log Entry 1 00:28:58.977 ---------------------- 00:28:58.977 Transport Type: 3 (TCP) 00:28:58.977 Address Family: 1 (IPv4) 00:28:58.977 Subsystem Type: 2 (NVM Subsystem) 00:28:58.977 Entry Flags: 00:28:58.977 Duplicate Returned Information: 0 00:28:58.977 Explicit Persistent Connection Support for Discovery: 0 00:28:58.977 Transport Requirements: 00:28:58.977 Secure Channel: Not Specified 00:28:58.977 Port ID: 1 (0x0001) 00:28:58.977 Controller ID: 65535 (0xffff) 00:28:58.977 Admin Max SQ Size: 32 00:28:58.977 Transport Service Identifier: 4420 00:28:58.977 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:58.977 Transport Address: 10.0.0.1 00:28:58.977 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:59.237 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.237 get_feature(0x01) failed 00:28:59.237 get_feature(0x02) failed 00:28:59.237 get_feature(0x04) failed 00:28:59.237 ===================================================== 00:28:59.237 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:59.237 ===================================================== 00:28:59.237 Controller Capabilities/Features 00:28:59.237 ================================ 00:28:59.237 Vendor ID: 0000 00:28:59.237 Subsystem Vendor ID: 0000 00:28:59.237 Serial Number: 2bc25f678347028be174 00:28:59.237 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:59.237 Firmware Version: 6.7.0-68 00:28:59.237 Recommended Arb Burst: 6 00:28:59.237 IEEE OUI Identifier: 00 00 00 00:28:59.237 Multi-path I/O 00:28:59.237 May have multiple subsystem ports: Yes 00:28:59.237 May have multiple controllers: Yes 00:28:59.237 Associated with SR-IOV VF: No 00:28:59.237 Max Data Transfer Size: Unlimited 00:28:59.237 Max Number of Namespaces: 1024 00:28:59.237 Max Number of I/O Queues: 128 00:28:59.237 NVMe Specification Version (VS): 1.3 00:28:59.237 NVMe Specification Version (Identify): 1.3 00:28:59.237 Maximum Queue Entries: 1024 00:28:59.237 Contiguous Queues Required: No 00:28:59.237 Arbitration Mechanisms Supported 00:28:59.237 Weighted Round Robin: Not Supported 00:28:59.237 Vendor Specific: Not Supported 
00:28:59.237 Reset Timeout: 7500 ms 00:28:59.237 Doorbell Stride: 4 bytes 00:28:59.237 NVM Subsystem Reset: Not Supported 00:28:59.237 Command Sets Supported 00:28:59.237 NVM Command Set: Supported 00:28:59.237 Boot Partition: Not Supported 00:28:59.237 Memory Page Size Minimum: 4096 bytes 00:28:59.237 Memory Page Size Maximum: 4096 bytes 00:28:59.237 Persistent Memory Region: Not Supported 00:28:59.237 Optional Asynchronous Events Supported 00:28:59.237 Namespace Attribute Notices: Supported 00:28:59.237 Firmware Activation Notices: Not Supported 00:28:59.237 ANA Change Notices: Supported 00:28:59.237 PLE Aggregate Log Change Notices: Not Supported 00:28:59.237 LBA Status Info Alert Notices: Not Supported 00:28:59.237 EGE Aggregate Log Change Notices: Not Supported 00:28:59.237 Normal NVM Subsystem Shutdown event: Not Supported 00:28:59.237 Zone Descriptor Change Notices: Not Supported 00:28:59.237 Discovery Log Change Notices: Not Supported 00:28:59.237 Controller Attributes 00:28:59.237 128-bit Host Identifier: Supported 00:28:59.237 Non-Operational Permissive Mode: Not Supported 00:28:59.237 NVM Sets: Not Supported 00:28:59.237 Read Recovery Levels: Not Supported 00:28:59.237 Endurance Groups: Not Supported 00:28:59.237 Predictable Latency Mode: Not Supported 00:28:59.237 Traffic Based Keep ALive: Supported 00:28:59.237 Namespace Granularity: Not Supported 00:28:59.237 SQ Associations: Not Supported 00:28:59.237 UUID List: Not Supported 00:28:59.237 Multi-Domain Subsystem: Not Supported 00:28:59.237 Fixed Capacity Management: Not Supported 00:28:59.237 Variable Capacity Management: Not Supported 00:28:59.237 Delete Endurance Group: Not Supported 00:28:59.237 Delete NVM Set: Not Supported 00:28:59.237 Extended LBA Formats Supported: Not Supported 00:28:59.237 Flexible Data Placement Supported: Not Supported 00:28:59.237 00:28:59.237 Controller Memory Buffer Support 00:28:59.237 ================================ 00:28:59.237 Supported: No 00:28:59.237 00:28:59.237 Persistent Memory Region Support 00:28:59.237 ================================ 00:28:59.237 Supported: No 00:28:59.237 00:28:59.237 Admin Command Set Attributes 00:28:59.237 ============================ 00:28:59.237 Security Send/Receive: Not Supported 00:28:59.237 Format NVM: Not Supported 00:28:59.237 Firmware Activate/Download: Not Supported 00:28:59.237 Namespace Management: Not Supported 00:28:59.237 Device Self-Test: Not Supported 00:28:59.237 Directives: Not Supported 00:28:59.237 NVMe-MI: Not Supported 00:28:59.237 Virtualization Management: Not Supported 00:28:59.237 Doorbell Buffer Config: Not Supported 00:28:59.237 Get LBA Status Capability: Not Supported 00:28:59.237 Command & Feature Lockdown Capability: Not Supported 00:28:59.237 Abort Command Limit: 4 00:28:59.237 Async Event Request Limit: 4 00:28:59.237 Number of Firmware Slots: N/A 00:28:59.237 Firmware Slot 1 Read-Only: N/A 00:28:59.237 Firmware Activation Without Reset: N/A 00:28:59.237 Multiple Update Detection Support: N/A 00:28:59.237 Firmware Update Granularity: No Information Provided 00:28:59.237 Per-Namespace SMART Log: Yes 00:28:59.237 Asymmetric Namespace Access Log Page: Supported 00:28:59.237 ANA Transition Time : 10 sec 00:28:59.237 00:28:59.237 Asymmetric Namespace Access Capabilities 00:28:59.237 ANA Optimized State : Supported 00:28:59.237 ANA Non-Optimized State : Supported 00:28:59.237 ANA Inaccessible State : Supported 00:28:59.237 ANA Persistent Loss State : Supported 00:28:59.237 ANA Change State : Supported 00:28:59.237 ANAGRPID is not 
changed : No 00:28:59.237 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:59.237 00:28:59.237 ANA Group Identifier Maximum : 128 00:28:59.237 Number of ANA Group Identifiers : 128 00:28:59.237 Max Number of Allowed Namespaces : 1024 00:28:59.237 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:59.237 Command Effects Log Page: Supported 00:28:59.237 Get Log Page Extended Data: Supported 00:28:59.237 Telemetry Log Pages: Not Supported 00:28:59.237 Persistent Event Log Pages: Not Supported 00:28:59.237 Supported Log Pages Log Page: May Support 00:28:59.237 Commands Supported & Effects Log Page: Not Supported 00:28:59.237 Feature Identifiers & Effects Log Page:May Support 00:28:59.237 NVMe-MI Commands & Effects Log Page: May Support 00:28:59.237 Data Area 4 for Telemetry Log: Not Supported 00:28:59.237 Error Log Page Entries Supported: 128 00:28:59.237 Keep Alive: Supported 00:28:59.237 Keep Alive Granularity: 1000 ms 00:28:59.237 00:28:59.237 NVM Command Set Attributes 00:28:59.237 ========================== 00:28:59.237 Submission Queue Entry Size 00:28:59.237 Max: 64 00:28:59.237 Min: 64 00:28:59.237 Completion Queue Entry Size 00:28:59.237 Max: 16 00:28:59.237 Min: 16 00:28:59.237 Number of Namespaces: 1024 00:28:59.237 Compare Command: Not Supported 00:28:59.237 Write Uncorrectable Command: Not Supported 00:28:59.237 Dataset Management Command: Supported 00:28:59.237 Write Zeroes Command: Supported 00:28:59.237 Set Features Save Field: Not Supported 00:28:59.237 Reservations: Not Supported 00:28:59.237 Timestamp: Not Supported 00:28:59.237 Copy: Not Supported 00:28:59.237 Volatile Write Cache: Present 00:28:59.237 Atomic Write Unit (Normal): 1 00:28:59.237 Atomic Write Unit (PFail): 1 00:28:59.237 Atomic Compare & Write Unit: 1 00:28:59.237 Fused Compare & Write: Not Supported 00:28:59.237 Scatter-Gather List 00:28:59.237 SGL Command Set: Supported 00:28:59.237 SGL Keyed: Not Supported 00:28:59.237 SGL Bit Bucket Descriptor: Not Supported 00:28:59.237 SGL Metadata Pointer: Not Supported 00:28:59.237 Oversized SGL: Not Supported 00:28:59.237 SGL Metadata Address: Not Supported 00:28:59.237 SGL Offset: Supported 00:28:59.237 Transport SGL Data Block: Not Supported 00:28:59.237 Replay Protected Memory Block: Not Supported 00:28:59.237 00:28:59.237 Firmware Slot Information 00:28:59.237 ========================= 00:28:59.237 Active slot: 0 00:28:59.237 00:28:59.237 Asymmetric Namespace Access 00:28:59.238 =========================== 00:28:59.238 Change Count : 0 00:28:59.238 Number of ANA Group Descriptors : 1 00:28:59.238 ANA Group Descriptor : 0 00:28:59.238 ANA Group ID : 1 00:28:59.238 Number of NSID Values : 1 00:28:59.238 Change Count : 0 00:28:59.238 ANA State : 1 00:28:59.238 Namespace Identifier : 1 00:28:59.238 00:28:59.238 Commands Supported and Effects 00:28:59.238 ============================== 00:28:59.238 Admin Commands 00:28:59.238 -------------- 00:28:59.238 Get Log Page (02h): Supported 00:28:59.238 Identify (06h): Supported 00:28:59.238 Abort (08h): Supported 00:28:59.238 Set Features (09h): Supported 00:28:59.238 Get Features (0Ah): Supported 00:28:59.238 Asynchronous Event Request (0Ch): Supported 00:28:59.238 Keep Alive (18h): Supported 00:28:59.238 I/O Commands 00:28:59.238 ------------ 00:28:59.238 Flush (00h): Supported 00:28:59.238 Write (01h): Supported LBA-Change 00:28:59.238 Read (02h): Supported 00:28:59.238 Write Zeroes (08h): Supported LBA-Change 00:28:59.238 Dataset Management (09h): Supported 00:28:59.238 00:28:59.238 Error Log 00:28:59.238 ========= 
00:28:59.238 Entry: 0 00:28:59.238 Error Count: 0x3 00:28:59.238 Submission Queue Id: 0x0 00:28:59.238 Command Id: 0x5 00:28:59.238 Phase Bit: 0 00:28:59.238 Status Code: 0x2 00:28:59.238 Status Code Type: 0x0 00:28:59.238 Do Not Retry: 1 00:28:59.238 Error Location: 0x28 00:28:59.238 LBA: 0x0 00:28:59.238 Namespace: 0x0 00:28:59.238 Vendor Log Page: 0x0 00:28:59.238 ----------- 00:28:59.238 Entry: 1 00:28:59.238 Error Count: 0x2 00:28:59.238 Submission Queue Id: 0x0 00:28:59.238 Command Id: 0x5 00:28:59.238 Phase Bit: 0 00:28:59.238 Status Code: 0x2 00:28:59.238 Status Code Type: 0x0 00:28:59.238 Do Not Retry: 1 00:28:59.238 Error Location: 0x28 00:28:59.238 LBA: 0x0 00:28:59.238 Namespace: 0x0 00:28:59.238 Vendor Log Page: 0x0 00:28:59.238 ----------- 00:28:59.238 Entry: 2 00:28:59.238 Error Count: 0x1 00:28:59.238 Submission Queue Id: 0x0 00:28:59.238 Command Id: 0x4 00:28:59.238 Phase Bit: 0 00:28:59.238 Status Code: 0x2 00:28:59.238 Status Code Type: 0x0 00:28:59.238 Do Not Retry: 1 00:28:59.238 Error Location: 0x28 00:28:59.238 LBA: 0x0 00:28:59.238 Namespace: 0x0 00:28:59.238 Vendor Log Page: 0x0 00:28:59.238 00:28:59.238 Number of Queues 00:28:59.238 ================ 00:28:59.238 Number of I/O Submission Queues: 128 00:28:59.238 Number of I/O Completion Queues: 128 00:28:59.238 00:28:59.238 ZNS Specific Controller Data 00:28:59.238 ============================ 00:28:59.238 Zone Append Size Limit: 0 00:28:59.238 00:28:59.238 00:28:59.238 Active Namespaces 00:28:59.238 ================= 00:28:59.238 get_feature(0x05) failed 00:28:59.238 Namespace ID:1 00:28:59.238 Command Set Identifier: NVM (00h) 00:28:59.238 Deallocate: Supported 00:28:59.238 Deallocated/Unwritten Error: Not Supported 00:28:59.238 Deallocated Read Value: Unknown 00:28:59.238 Deallocate in Write Zeroes: Not Supported 00:28:59.238 Deallocated Guard Field: 0xFFFF 00:28:59.238 Flush: Supported 00:28:59.238 Reservation: Not Supported 00:28:59.238 Namespace Sharing Capabilities: Multiple Controllers 00:28:59.238 Size (in LBAs): 1953525168 (931GiB) 00:28:59.238 Capacity (in LBAs): 1953525168 (931GiB) 00:28:59.238 Utilization (in LBAs): 1953525168 (931GiB) 00:28:59.238 UUID: c741b9ab-39bf-41ce-95da-909475e3394a 00:28:59.238 Thin Provisioning: Not Supported 00:28:59.238 Per-NS Atomic Units: Yes 00:28:59.238 Atomic Boundary Size (Normal): 0 00:28:59.238 Atomic Boundary Size (PFail): 0 00:28:59.238 Atomic Boundary Offset: 0 00:28:59.238 NGUID/EUI64 Never Reused: No 00:28:59.238 ANA group ID: 1 00:28:59.238 Namespace Write Protected: No 00:28:59.238 Number of LBA Formats: 1 00:28:59.238 Current LBA Format: LBA Format #00 00:28:59.238 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:59.238 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.238 rmmod nvme_tcp 00:28:59.238 rmmod nvme_fabrics 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.238 21:45:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.143 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:01.143 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:01.143 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:01.143 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:29:01.402 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:01.402 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:01.403 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:01.403 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:01.403 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:01.403 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:01.403 21:46:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:03.938 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:29:03.938 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:03.938 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:04.875 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:29:05.135 00:29:05.135 real 0m17.159s 00:29:05.135 user 0m4.215s 00:29:05.135 sys 0m9.186s 00:29:05.135 21:46:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:05.135 21:46:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.135 ************************************ 00:29:05.135 END TEST nvmf_identify_kernel_target 00:29:05.135 ************************************ 00:29:05.135 21:46:05 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:05.135 21:46:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:05.135 21:46:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:05.135 21:46:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.135 ************************************ 00:29:05.135 START TEST nvmf_auth_host 00:29:05.135 ************************************ 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:05.135 * Looking for test storage... 00:29:05.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
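The configure_kernel_target/clean_kernel_target pair traced above is plain configfs manipulation. The following is condensed from the echoed commands in this log; note that xtrace does not show redirection targets, so the attribute filenames below are the standard nvmet configfs names, inferred rather than read from the trace. Run as root with the nvmet and nvmet_tcp modules loaded:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    # setup (configure_kernel_target), in the order the trace shows
    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"

    # teardown (clean_kernel_target)
    echo 0 > "$sub/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet

With the port symlinked, the discovery output seen earlier in this log is reproducible with the same command the script ran: nvme discover --hostnqn=<hostnqn> --hostid=<hostid> -a 10.0.0.1 -t tcp -s 4420.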
00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:05.135 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:11.701 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:11.701 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:11.701 Found net devices under 0000:af:00.0: cvl_0_0 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:11.701 Found net devices under 0000:af:00.1: cvl_0_1 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.701 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:11.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:29:11.702 00:29:11.702 --- 10.0.0.2 ping statistics --- 00:29:11.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.702 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:11.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:29:11.702 00:29:11.702 --- 10.0.0.1 ping statistics --- 00:29:11.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.702 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1605518 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1605518 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF -L nvme_auth 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1605518 ']' 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:11.702 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.640 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:12.640 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:29:12.640 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:12.640 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:12.640 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e2b602953607d7d2e7efa95b93e5ad4e 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.srH 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e2b602953607d7d2e7efa95b93e5ad4e 0 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e2b602953607d7d2e7efa95b93e5ad4e 0 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e2b602953607d7d2e7efa95b93e5ad4e 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.srH 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.srH 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.srH 
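Note: the nvmf_tcp_init sequence traced above builds the whole TCP test bed on one machine: the target-side E810 port (cvl_0_0) is moved into a private network namespace, each side gets an address on 10.0.0.0/24, TCP port 4420 is opened in the firewall, reachability is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace (in this auth test the SPDK app plays the NVMe-oF host role against a kernel nvmet target configured later in the trace). A condensed replay of those steps; interface names, addresses, and nvmf_tgt flags are taken verbatim from the trace, while backgrounding the binary with & is a simplification of the harness's nvmfappstart/waitforlisten logic:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # root-namespace side (NVMF_INITIATOR_IP)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # namespaced side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
ping -c 1 10.0.0.2                                    # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &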
00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:29:12.899 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ad6cd468a0388f34b27bde3d93251594b361c84fcb911f505832161dac46c4a3 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.y3O 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ad6cd468a0388f34b27bde3d93251594b361c84fcb911f505832161dac46c4a3 3 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ad6cd468a0388f34b27bde3d93251594b361c84fcb911f505832161dac46c4a3 3 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ad6cd468a0388f34b27bde3d93251594b361c84fcb911f505832161dac46c4a3 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.y3O 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.y3O 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.y3O 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=830bf2a6e3a7a0831c882b0f6fdf001212fc2dfc2a98ce1e 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OlP 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 830bf2a6e3a7a0831c882b0f6fdf001212fc2dfc2a98ce1e 0 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 830bf2a6e3a7a0831c882b0f6fdf001212fc2dfc2a98ce1e 0 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=830bf2a6e3a7a0831c882b0f6fdf001212fc2dfc2a98ce1e 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OlP 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OlP 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.OlP 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=251fc638c92ead2be3928026dd63afa3d148cc4ded48b03b 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1mG 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 251fc638c92ead2be3928026dd63afa3d148cc4ded48b03b 2 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 251fc638c92ead2be3928026dd63afa3d148cc4ded48b03b 2 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:12.899 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:12.900 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=251fc638c92ead2be3928026dd63afa3d148cc4ded48b03b 00:29:12.900 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:29:12.900 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1mG 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1mG 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1mG 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=c7c83b8d2341165411fe9350c7279537 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.F4V 00:29:13.157 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c7c83b8d2341165411fe9350c7279537 1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c7c83b8d2341165411fe9350c7279537 1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c7c83b8d2341165411fe9350c7279537 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.F4V 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.F4V 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.F4V 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5060281adfccea01ea74f5c32a6d1f99 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.czo 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5060281adfccea01ea74f5c32a6d1f99 1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5060281adfccea01ea74f5c32a6d1f99 1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5060281adfccea01ea74f5c32a6d1f99 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.czo 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.czo 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.czo 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:13.158 21:46:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d529bacaa3e58332eecb21ce2c96b9d37049915134d43be 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ek9 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d529bacaa3e58332eecb21ce2c96b9d37049915134d43be 2 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d529bacaa3e58332eecb21ce2c96b9d37049915134d43be 2 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d529bacaa3e58332eecb21ce2c96b9d37049915134d43be 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ek9 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ek9 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ek9 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=998c843c102b9bc3e303775963f5b627 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V4w 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 998c843c102b9bc3e303775963f5b627 0 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 998c843c102b9bc3e303775963f5b627 0 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=998c843c102b9bc3e303775963f5b627 00:29:13.158 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:29:13.158 21:46:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V4w 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V4w 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.V4w 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9726bccba8a117b240821c75ae20828d7e3bd83be2303781a38f83bbd59e8bc1 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Iuu 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9726bccba8a117b240821c75ae20828d7e3bd83be2303781a38f83bbd59e8bc1 3 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9726bccba8a117b240821c75ae20828d7e3bd83be2303781a38f83bbd59e8bc1 3 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9726bccba8a117b240821c75ae20828d7e3bd83be2303781a38f83bbd59e8bc1 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Iuu 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Iuu 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Iuu 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1605518 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1605518 ']' 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
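Note: every secret generated above follows the same DHHC-1 envelope. gen_dhchap_key pulls len/2 random bytes with xxd, keeps them as an ASCII hex string, and the inline python step wraps that string as DHHC-1:<id>:<base64 payload>:, where <id> is the index from the digests map (null=0, sha256=1, sha384=2, sha512=3). The payload is the hex text itself plus four trailing bytes; for example, the base64 run "ODMwYmYy..." seen later for keys[1] decodes straight back to the hex string "830bf2...", confirming the ASCII text is the secret. A minimal sketch of the formatting step, assuming (as in nvme-cli's key format) that the four trailing bytes are a little-endian CRC32 of the secret:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, used as the ASCII secret
digest=0                               # null=0, sha256=1, sha384=2, sha512=3
python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex text itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # trailing CRC32; byte order is an assumption
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
' "$key" "$digest"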
00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:13.416 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.srH 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.y3O ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.y3O 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.OlP 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1mG ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1mG 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.F4V 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.czo ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.czo 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
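Note: rpc_cmd in the loop above is essentially the harness wrapper around scripts/rpc.py, talking to the default /var/tmp/spdk.sock socket shown earlier, so each iteration boils down to two plain RPC calls that register a host/controller key pair under the names (keyN/ckeyN) the later attach steps reference:

scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.srH      # host secret
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.y3O   # controller secret, for bidirectional auth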
00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ek9 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.V4w ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.V4w 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Iuu 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
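Note: with all five key pairs registered, nvmet_auth_init now stands up a Linux kernel nvmet target as the authenticating endpoint. The configure_kernel_target body that follows creates the configfs tree, points namespace 1 at the local NVMe disk, and publishes the subsystem on the TCP port. The xtrace elides the redirection targets of the echo calls, so the attribute paths in this sketch are the standard kernel nvmet configfs names, and mapping them to the echoed values is an informed assumption rather than a copy of nvmf/common.sh:

modprobe nvmet
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
mkdir /sys/kernel/config/nvmet/ports/1
echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/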
00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:13.675 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:16.962 Waiting for block devices as requested 00:29:16.962 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:29:16.962 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:16.962 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:16.962 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:17.221 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:17.221 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:17.221 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:17.480 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:17.480 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:17.480 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:17.480 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:17.739 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:17.739 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:17.739 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:17.998 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:17.998 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:17.998 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:18.565 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:18.565 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:18.565 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:18.565 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:29:18.565 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:18.565 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:18.566 No valid GPT data, bailing 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:18.566 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:18.825 00:29:18.825 Discovery Log Number of Records 2, Generation counter 2 00:29:18.825 =====Discovery Log Entry 0====== 00:29:18.825 trtype: tcp 00:29:18.825 adrfam: ipv4 00:29:18.825 subtype: current discovery subsystem 00:29:18.825 treq: not specified, sq flow control disable supported 00:29:18.825 portid: 1 00:29:18.825 trsvcid: 4420 00:29:18.825 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:18.825 traddr: 10.0.0.1 00:29:18.825 eflags: none 00:29:18.825 sectype: none 00:29:18.825 =====Discovery Log Entry 1====== 00:29:18.825 trtype: tcp 00:29:18.825 adrfam: ipv4 00:29:18.825 subtype: nvme subsystem 00:29:18.825 treq: not specified, sq flow control disable supported 00:29:18.825 portid: 1 00:29:18.825 trsvcid: 4420 00:29:18.825 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:18.825 traddr: 10.0.0.1 00:29:18.825 eflags: none 00:29:18.825 sectype: none 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 
]] 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.825 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 nvme0n1 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.084 21:46:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:19.084 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.085 
21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.085 nvme0n1 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.085 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.344 21:46:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.344 nvme0n1 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
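[editor's note] nvmet_auth_set_key (host/auth.sh@42-@51) echoes the digest, DH group, key, and, when present, the controller key; xtrace does not show where those echoes are redirected. A hedged sketch of the plausible target-side provisioning, assuming the kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the destinations; the exact paths and the hostnqn variable are not visible in this excerpt:

    # Assumed target-side provisioning; host_dir/hostnqn are hypothetical,
    # since the xtrace hides the redirection targets.
    host_dir="/sys/kernel/config/nvmet/hosts/${hostnqn}"
    echo 'hmac(sha256)' > "${host_dir}/dhchap_hash"     # host/auth.sh@48
    echo "${dhgroup}"   > "${host_dir}/dhchap_dhgroup"  # host/auth.sh@49
    echo "${key}"       > "${host_dir}/dhchap_key"      # host/auth.sh@50
    # @51: only set the controller key when a ckey exists (bidirectional auth)
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"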
00:29:19.344 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.603 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.604 nvme0n1 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:19.604 21:46:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:19.604 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.864 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.864 nvme0n1 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.864 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.124 nvme0n1 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.124 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.383 nvme0n1 00:29:20.383 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.383 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.383 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.383 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.384 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.643 nvme0n1 00:29:20.643 
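[editor's note] On the host side, connect_authenticate first restricts the allowed digests and DH groups, then attaches with the per-iteration keys; both RPCs appear verbatim in the trace above. Outside the rpc_cmd harness, the same two calls for the ffdhe3072/key1 iteration just logged could be issued with SPDK's scripts/rpc.py (key1/ckey1 are keyring key names registered earlier in the test; that registration is not part of this excerpt):

    # Equivalent standalone RPC calls for the ffdhe3072/keyid=1 iteration.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1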
21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.643 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.644 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.644 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.644 21:46:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.644 21:46:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.644 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.644 21:46:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.903 nvme0n1 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.903 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.162 nvme0n1 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.162 
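[editor's note] Every iteration ends with the same verification and teardown, visible at host/auth.sh@64-@65 throughout this log: list the attached controllers, confirm the controller came up under the expected name (i.e. authentication succeeded), then detach so the next digest/dhgroup/key combination starts clean. Roughly:

    # Post-connect check and teardown, as performed at host/auth.sh@64-@65.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ ${name} == "nvme0" ]]                   # DH-HMAC-CHAP handshake succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0  # clean up for the next keyid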
21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:21.162 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.163 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.422 21:46:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.422 nvme0n1 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:21.422 21:46:21 
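[editor's note] get_main_ns_ip (nvmf/common.sh@741-@755) picks which environment variable names the address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, resolved through bash indirect expansion; that is why the trace prints the variable name first and 10.0.0.1 only at the final echo. A reconstruction from the trace (the exact control flow around the @747 checks is inferred):

    # Reconstructed from the nvmf/common.sh@741-@755 xtrace above.
    get_main_ns_ip() {
        local ip                                    # @741
        local -A ip_candidates                      # @742
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # @745
        # @747: the transport and its candidate variable must both be known
        [[ -z ${TEST_TRANSPORT} ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}        # @748
        [[ -z ${!ip} ]] && return 1                 # @750: indirect expansion
        echo "${!ip}"                               # @755: here, 10.0.0.1
    }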
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.422 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.681 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.941 nvme0n1 00:29:21.941 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.941 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.941 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.941 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.941 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.941 21:46:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.941 21:46:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.941 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.200 nvme0n1 00:29:22.200 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.200 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.201 21:46:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.201 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.460 nvme0n1 00:29:22.460 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.460 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.460 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.460 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.460 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.460 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.719 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.719 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.719 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.719 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.719 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.720 21:46:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.978 nvme0n1 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.978 21:46:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.978 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.236 nvme0n1 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.236 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:23.237 21:46:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.237 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.804 nvme0n1 00:29:23.804 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.804 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.804 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.804 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.804 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.804 21:46:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.804 
21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.804 21:46:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.804 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.371 nvme0n1 00:29:24.371 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.371 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.372 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.940 nvme0n1 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.940 
21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:24.940 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.941 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.509 nvme0n1 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.509 21:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.077 nvme0n1 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.077 21:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.014 nvme0n1 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.014 21:46:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.014 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.015 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.617 nvme0n1 00:29:27.617 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.617 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.617 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.617 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.617 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.617 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.898 21:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.466 nvme0n1 00:29:28.466 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.725 
21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
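The nvmf/common.sh@741-755 lines being traced here are get_main_ns_ip resolving which address the initiator should dial. Reconstructed from the trace, the helper is roughly the following sketch (the TEST_TRANSPORT variable name is an assumption; the trace only shows its value, tcp):

    # Map the transport in use to the env var holding the right IP,
    # then dereference it (10.0.0.1 in this run).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion: $NVMF_INITIATOR_IP
        echo "${!ip}"
    }

Registering both candidates before the transport lookup is why NVMF_FIRST_TARGET_IP appears in every trace of this helper even though this is a tcp run.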
00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.725 21:46:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.662 nvme0n1 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:29.662 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:29.663 
21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.663 21:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.229 nvme0n1 00:29:30.229 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.229 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.229 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.229 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.229 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:30.488 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.489 nvme0n1 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.489 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.747 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.747 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
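Every pass traced above and below follows the same host-side sequence. A condensed sketch of the loop body, reconstructed from the xtrace (rpc_cmd is the harness wrapper around the SPDK RPC client; the exact bodies of connect_authenticate and the loop in host/auth.sh may differ in detail):

    # One pass of the digests/dhgroups/keys loop visible at auth.sh@100-103.
    # The NQNs and 10.0.0.1:4420 are the values used by this run.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                # Host side: restrict negotiation to this digest/dhgroup ...
                rpc_cmd bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # ... then connect with the matching DH-HMAC-CHAP key(s);
                # keyid 4 has no ckey, so --dhchap-ctrlr-key is dropped there
                # (the ${ckeys[keyid]:+...} expansion seen at auth.sh@58).
                ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
                rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
                    -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key${keyid}" "${ckey[@]}"
                # Verify the controller authenticated and came up, then tear
                # it down so the next tuple starts clean (auth.sh@64-65).
                [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
                rpc_cmd bdev_nvme_detach_controller nvme0
            done
        done
    done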
00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.748 nvme0n1 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.748 21:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.748 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.007 nvme0n1 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.007 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.266 nvme0n1 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:31.266 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.267 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.526 nvme0n1 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
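The nvmet_auth_set_key call at the top of each pass (auth.sh@42-51) provisions the kernel target with the same digest, DH group, and key before the host dials in. xtrace does not print redirection targets, so the destinations below are an assumption: on a Linux kernel target these echoes would land in the configfs attributes of the allowed-host entry, and $nvmet_host_dir is a hypothetical name for that directory, e.g. /sys/kernel/config/nvmet/hosts/<hostnqn>:

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"

        # Redirection targets assumed; only the echoes appear in the trace.
        echo "hmac($digest)" > "$nvmet_host_dir/dhchap_hash"    # e.g. hmac(sha384)
        echo "$dhgroup" > "$nvmet_host_dir/dhchap_dhgroup"      # e.g. ffdhe3072
        echo "$key" > "$nvmet_host_dir/dhchap_key"              # DHHC-1:xx:...
        # keyid 4 carries no controller key (ckey=''), which is why the
        # trace shows [[ -z '' ]] succeeding and the ctrl key being skipped.
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host_dir/dhchap_ctrl_key"
    }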
00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.526 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.785 nvme0n1 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
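The get_main_ns_ip trace repeated before every attach (nvmf/common.sh@741-755) resolves which address the initiator should dial: it maps the transport to the name of an environment variable and dereferences it via bash indirect expansion, which is why the literal string NVMF_INITIATOR_IP appears in the [[ -z ... ]] checks before 10.0.0.1 does. An approximate reconstruction from the trace (guard clauses condensed; the real function body may differ):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # common.sh@747: the transport must be known and mapped (here: tcp)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # @748: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1             # @750: named variable must be set
        echo "${!ip}"                           # @755: here, 10.0.0.1
    }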
00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.785 21:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.044 nvme0n1 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.044 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.303 nvme0n1 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.303 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.562 nvme0n1 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.562 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.821 nvme0n1 00:29:32.821 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.821 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.821 21:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.821 21:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.821 21:46:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:32.821 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.822 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.389 nvme0n1 00:29:33.389 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.389 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.389 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.389 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.390 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.651 nvme0n1 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.651 21:46:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:33.651 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- 
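
# Reading aid for this trace format: each entry is "<script>@<lineno> -- # <cmd>"
# as printed by bash xtrace. The "\n\v\m\e\0" form is just how xtrace re-quotes
# the quoted right-hand side of [[ == ]] (every character backslash-escaped so
# it matches literally rather than as a glob); the underlying test is simply
# [[ nvme0 == "nvme0" ]]. The recurring [[ 0 == 0 ]] entries are the framework
# asserting that the previous RPC exited with status 0. Reproduction sketch:
set -x
name=nvme0
[[ $name == "nvme0" ]] && echo controller-name-ok   # traces as ... == \n\v\m\e\0 ]]
set +x
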
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.652 21:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.913 nvme0n1 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:33.913 21:46:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:33.913 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.171 nvme0n1 00:29:34.172 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.172 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.172 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.172 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.172 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
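
# "${!keys[@]}" in the loop header above expands to the *indices* of the keys
# array rather than its values, which is why the trace walks key ids 0..4 in
# order and can index the matching ckeys entry by the same id. Stand-alone
# illustration (placeholder secrets, not the test keys):
keys=("DHHC-1:00:AAAA:" "DHHC-1:01:BBBB:" "DHHC-1:02:CCCC:")
for keyid in "${!keys[@]}"; do
        echo "key id $keyid -> ${keys[keyid]}"
done
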
"${!keys[@]}" 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:34.431 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.691 nvme0n1 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
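
# Each iteration first pins the initiator to a single digest/dhgroup pair via
# bdev_nvme_set_options, so every combination is negotiated in isolation.
# Equivalent standalone call through SPDK's RPC client (script path relative
# to an SPDK checkout is an assumption; values match the ffdhe6144 pass
# starting here):
./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe6144
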
sha384 --dhchap-dhgroups ffdhe6144 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.691 21:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.260 nvme0n1 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.260 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.828 nvme0n1 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.828 21:46:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.828 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
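
# The get_main_ns_ip helper traced here picks the dial address purely from the
# transport: an associative array maps transport -> the *name* of the env var
# holding the address, and ${!ip} indirection then reads it
# (NVMF_INITIATOR_IP=10.0.0.1 in this run). Sketch of that mechanism, with the
# variable pre-set for illustration:
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[tcp]}   # -> the name "NVMF_INITIATOR_IP"
        ip=${!ip}                  # indirect expansion -> 10.0.0.1
        [[ -n $ip ]] && echo "$ip"
}
get_main_ns_ip
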
local ip 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.829 21:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.397 nvme0n1 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.397 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.966 nvme0n1 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.966 21:46:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.966 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.225 nvme0n1 00:29:37.225 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.225 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.225 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.225 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.225 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
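
# The "for dhgroup" advance visible above moves the sweep from ffdhe6144 to
# ffdhe8192 while the digest stays sha384: the test runs a full digest x
# dhgroup x key-id matrix. Loop shape, listing only the values that appear in
# this part of the log (the complete arrays in the test may hold more entries):
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
                for keyid in 0 1 2 3 4; do
                        : # nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"; connect_authenticate "$digest" "$dhgroup" "$keyid"
                done
        done
done
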
00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.484 21:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.422 nvme0n1 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
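
# nvmet_auth_set_key, invoked next for ffdhe8192 key id 1, pushes the digest,
# DH group and DHHC-1 secrets into the kernel soft-target. Assuming the stock
# Linux nvmet configfs layout (host entry already created; attribute names as
# in upstream nvmet), the writes look roughly like this; <host-secret> and
# ctrl_secret are placeholders, not values from this run:
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"     # the 'hmac(...)' echoes in the trace
echo ffdhe8192 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:<host-secret>:' > "$host/dhchap_key"
# Bidirectional auth only: write the controller-side secret when one is defined.
[[ -n $ctrl_secret ]] && echo "$ctrl_secret" > "$host/dhchap_ctrl_key"
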
nvmet_auth_set_key sha384 ffdhe8192 1 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.422 21:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.360 nvme0n1 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.360 21:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.929 nvme0n1 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:39.929 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.188 21:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.126 nvme0n1 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.126 21:46:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.126 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.694 nvme0n1 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.694 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.695 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.954 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.955 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.955 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.955 21:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.955 21:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.955 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.955 21:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.955 nvme0n1 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.955 21:46:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.955 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.215 nvme0n1 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.215 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.475 nvme0n1 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.475 21:46:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:42.475 21:46:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.475 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.734 nvme0n1 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.735 21:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.996 nvme0n1 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.996 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.256 nvme0n1 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.256 
21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.256 21:46:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.256 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.515 nvme0n1 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.515 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.774 nvme0n1 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.774 21:46:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.774 21:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.033 nvme0n1 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.033 
21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.033 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.293 nvme0n1 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.293 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.553 nvme0n1 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.553 21:46:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.553 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.812 21:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.072 nvme0n1 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
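The host/auth.sh@101-@104 markers in the entries above trace the loop driving this whole pass: for each DH group and each key id, the target-side key is installed and a full attach/verify/detach cycle must succeed. A condensed sketch, using the helper names exactly as they appear in the trace (the digest is sha512 throughout this pass):

    for dhgroup in "${dhgroups[@]}"; do          # ffdhe4096, ffdhe6144, ffdhe8192 in this excerpt
        for keyid in "${!keys[@]}"; do           # key ids 0..4
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # program the target's DH-CHAP key
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # pin host options, attach, verify, detach
        done
    done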
00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.072 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.331 nvme0n1 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:45.331 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.332 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.591 nvme0n1 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.591 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.850 21:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.109 nvme0n1 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
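The nvmf/common.sh@741-@755 entries being traced at this point are the get_main_ns_ip helper, which maps the active transport to the environment variable holding the initiator address and resolves it via bash indirect expansion. A reconstruction inferred from the xtrace alone (the TEST_TRANSPORT name and the error handling are assumptions; the trace only shows the already-expanded values):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # assumed variable name; the trace shows only the literal value "tcp" here
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1      # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        echo "${!ip}"
    }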
00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.109 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 nvme0n1 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
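connect_authenticate, traced next, boils down to the RPC sequence below. rpc_cmd is the autotest wrapper around scripts/rpc.py, so one iteration is roughly equivalent to the following direct calls (a sketch of this ffdhe6144/keyid=1 iteration, not the verbatim helper; the names key1/ckey1 refer to keys registered earlier in the test, outside this excerpt):

    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0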
00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.677 21:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.244 nvme0n1 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.244 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.503 nvme0n1 00:29:47.503 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.503 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.503 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.503 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.503 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.503 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.763 21:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.331 nvme0n1 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.331 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.332 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.591 nvme0n1 00:29:48.591 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.591 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.850 21:46:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJiNjAyOTUzNjA3ZDdkMmU3ZWZhOTViOTNlNWFkNGWDKxc7: 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: ]] 00:29:48.850 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ2Y2Q0NjhhMDM4OGYzNGIyN2JkZTNkOTMyNTE1OTRiMzYxYzg0ZmNiOTExZjUwNTgzMjE2MWRhYzQ2YzRhM+zC5So=: 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.851 21:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.787 nvme0n1 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:49.787 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.788 21:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.356 nvme0n1 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.356 21:46:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzdjODNiOGQyMzQxMTY1NDExZmU5MzUwYzcyNzk1MzdVo5TX: 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: ]] 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTA2MDI4MWFkZmNjZWEwMWVhNzRmNWMzMmE2ZDFmOTkeKHEz: 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.356 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.616 21:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.184 nvme0n1 00:29:51.184 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.184 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.184 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.184 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.184 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.184 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWQ1MjliYWNhYTNlNTgzMzJlZWNiMjFjZTJjOTZiOWQzNzA0OTkxNTEzNGQ0M2JlJvL88g==: 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: ]] 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk4Yzg0M2MxMDJiOWJjM2UzMDM3NzU5NjNmNWI2MjfLYrzA: 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:51.443 21:46:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.443 21:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.461 nvme0n1 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTcyNmJjY2JhOGExMTdiMjQwODIxYzc1YWUyMDgyOGQ3ZTNiZDgzYmUyMzAzNzgxYTM4ZjgzYmJkNTllOGJjMepoxMs=: 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:52.461 21:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.027 nvme0n1 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODMwYmYyYTZlM2E3YTA4MzFjODgyYjBmNmZkZjAwMTIxMmZjMmRmYzJhOThjZTFlqv2ogQ==: 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjUxZmM2MzhjOTJlYWQyYmUzOTI4MDI2ZGQ2M2FmYTNkMTQ4Y2M0ZGVkNDhiMDNihHwOiQ==: 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.027 
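The trace above repeats one round trip per keyid: restrict the initiator to a single digest/dhgroup pair, attach with that key, confirm the controller shows up, then detach. A condensed sketch of that loop, using only the RPCs visible in the trace and assuming rpc_cmd is the harness wrapper around scripts/rpc.py with the DHHC-1 key files already registered:

    # one authentication round trip per keyid, as host/auth.sh drives it
    connect_authenticate_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3
        # limit negotiation to one digest/dhgroup combination
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # attach with the matching host key (and controller key when defined)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
        # authentication only counts if the controller is actually visible
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # detach so the next keyid starts from a clean state
        rpc_cmd bdev_nvme_detach_controller nvme0
    }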
21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:53.027 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.028 request: 00:29:53.028 { 00:29:53.028 "name": "nvme0", 00:29:53.028 "trtype": "tcp", 00:29:53.028 "traddr": "10.0.0.1", 00:29:53.028 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:53.028 "adrfam": "ipv4", 00:29:53.028 "trsvcid": "4420", 00:29:53.028 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:53.028 "method": "bdev_nvme_attach_controller", 00:29:53.028 "req_id": 1 00:29:53.028 } 00:29:53.028 Got JSON-RPC error response 00:29:53.028 response: 00:29:53.028 { 00:29:53.028 "code": -5, 00:29:53.028 "message": "Input/output error" 00:29:53.028 } 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.028 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:53.285 
21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.285 request: 00:29:53.285 { 00:29:53.285 "name": "nvme0", 00:29:53.285 "trtype": "tcp", 00:29:53.285 "traddr": "10.0.0.1", 00:29:53.285 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:53.285 "adrfam": "ipv4", 00:29:53.285 "trsvcid": "4420", 00:29:53.285 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:53.285 "dhchap_key": "key2", 00:29:53.285 "method": "bdev_nvme_attach_controller", 00:29:53.285 "req_id": 1 00:29:53.285 } 00:29:53.285 Got JSON-RPC error response 00:29:53.285 response: 00:29:53.285 { 00:29:53.285 "code": -5, 00:29:53.285 "message": "Input/output error" 00:29:53.285 } 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:53.285 
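The request/response pairs around this point are deliberate failures: attaching without the right DHCHAP key (or with a mismatched controller key) must come back as code -5, Input/output error, and the NOT wrapper inverts the exit status so the suite only continues when the attach is rejected. A minimal sketch of that inversion, matching the es bookkeeping visible in the trace (local es=0, the es > 128 guard, and the final negation):

    # succeed only when the wrapped command fails; used as
    #   NOT rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key2
    NOT() {
        local es=0
        "$@" || es=$?                  # capture the status instead of aborting under set -e
        (( es > 128 )) && return "$es" # a signal-style exit is a hard error, not a pass
        (( es != 0 ))                  # invert: non-zero status from "$@" is the expected result
    }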
21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.285 request: 00:29:53.285 { 00:29:53.285 "name": "nvme0", 00:29:53.285 "trtype": "tcp", 00:29:53.285 "traddr": "10.0.0.1", 00:29:53.285 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:53.285 "adrfam": "ipv4", 00:29:53.285 "trsvcid": "4420", 00:29:53.285 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:53.285 "dhchap_key": "key1", 00:29:53.285 "dhchap_ctrlr_key": "ckey2", 00:29:53.285 "method": "bdev_nvme_attach_controller", 00:29:53.285 "req_id": 1 
00:29:53.285 } 00:29:53.285 Got JSON-RPC error response 00:29:53.285 response: 00:29:53.285 { 00:29:53.285 "code": -5, 00:29:53.285 "message": "Input/output error" 00:29:53.285 } 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:53.285 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:53.285 rmmod nvme_tcp 00:29:53.543 rmmod nvme_fabrics 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1605518 ']' 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1605518 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 1605518 ']' 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 1605518 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1605518 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1605518' 00:29:53.543 killing process with pid 1605518 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 1605518 00:29:53.543 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 1605518 00:29:53.802 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:53.802 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:53.802 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:53.802 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:53.802 21:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:53.802 21:46:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.802 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:53.802 21:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:55.707 21:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:58.997 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:58.997 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:59.937 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:29:59.937 21:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.srH /tmp/spdk.key-null.OlP /tmp/spdk.key-sha256.F4V /tmp/spdk.key-sha384.Ek9 /tmp/spdk.key-sha512.Iuu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:59.937 21:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:03.227 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:03.227 0000:00:04.6 (8086 2021): Already using the 
vfio-pci driver 00:30:03.227 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:03.227 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:03.228 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:03.228 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:03.228 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:03.228 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:03.228 00:30:03.228 real 0m57.721s 00:30:03.228 user 0m51.995s 00:30:03.228 sys 0m13.291s 00:30:03.228 21:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:03.228 21:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.228 ************************************ 00:30:03.228 END TEST nvmf_auth_host 00:30:03.228 ************************************ 00:30:03.228 21:47:02 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:30:03.228 21:47:02 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:03.228 21:47:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:03.228 21:47:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:03.228 21:47:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:03.228 ************************************ 00:30:03.228 START TEST nvmf_digest 00:30:03.228 ************************************ 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:03.228 * Looking for test storage... 
00:30:03.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:03.228 21:47:03 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:30:03.228 21:47:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:09.797 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:09.797 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:09.797 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:09.798 Found net devices under 0000:af:00.0: cvl_0_0 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:09.798 Found net devices under 0000:af:00.1: cvl_0_1 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:09.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:30:09.798 00:30:09.798 --- 10.0.0.2 ping statistics --- 00:30:09.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.798 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:09.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:30:09.798 00:30:09.798 --- 10.0.0.1 ping statistics --- 00:30:09.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.798 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:09.798 ************************************ 00:30:09.798 START TEST nvmf_digest_clean 00:30:09.798 ************************************ 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1621186 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1621186 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1621186 ']' 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.798 
21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:09.798 21:47:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:09.798 [2024-06-07 21:47:09.763022] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:09.798 [2024-06-07 21:47:09.763086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.798 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.798 [2024-06-07 21:47:09.859753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.798 [2024-06-07 21:47:09.949535] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.798 [2024-06-07 21:47:09.949576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.798 [2024-06-07 21:47:09.949587] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.798 [2024-06-07 21:47:09.949599] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.798 [2024-06-07 21:47:09.949609] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:09.798 [2024-06-07 21:47:09.949632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:10.736 null0 00:30:10.736 [2024-06-07 21:47:10.907075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.736 [2024-06-07 21:47:10.931249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1621344 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1621344 /var/tmp/bperf.sock 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1621344 ']' 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:30:10.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:10.736 21:47:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:10.736 [2024-06-07 21:47:10.981975] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:10.736 [2024-06-07 21:47:10.982018] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621344 ] 00:30:10.996 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.996 [2024-06-07 21:47:11.051625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.996 [2024-06-07 21:47:11.143506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.996 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:10.996 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:10.996 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:10.996 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:10.996 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:11.255 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:11.255 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:11.823 nvme0n1 00:30:11.823 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:11.823 21:47:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:11.823 Running I/O for 2 seconds... 
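For readability while the 2-second run executes: the bperf bring-up the xtrace above just walked through, condensed to its bare commands. Every flag is verbatim from the trace; only the $SPDK shorthand and the trailing & are added here.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &   # -z + --wait-for-rpc: idle until driven over RPC
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag is the point of the suite: it turns on the NVMe/TCP data digest (crc32c) on the initiator side, so every run below exercises the digest path.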
00:30:14.356 00:30:14.356 Latency(us) 00:30:14.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.356 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:14.356 nvme0n1 : 2.01 17719.66 69.22 0.00 0.00 7214.27 3395.96 18826.71 00:30:14.356 =================================================================================================================== 00:30:14.356 Total : 17719.66 69.22 0.00 0.00 7214.27 3395.96 18826.71 00:30:14.356 0 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:14.356 | select(.opcode=="crc32c") 00:30:14.356 | "\(.module_name) \(.executed)"' 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1621344 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1621344 ']' 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1621344 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1621344 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1621344' 00:30:14.356 killing process with pid 1621344 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1621344 00:30:14.356 Received shutdown signal, test time was about 2.000000 seconds 00:30:14.356 00:30:14.356 Latency(us) 00:30:14.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.356 =================================================================================================================== 00:30:14.356 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1621344 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:14.356 21:47:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1622122 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1622122 /var/tmp/bperf.sock 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1622122 ']' 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:14.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:14.356 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:14.356 [2024-06-07 21:47:14.573119] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:14.356 [2024-06-07 21:47:14.573179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622122 ] 00:30:14.356 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:14.356 Zero copy mechanism will not be used. 
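A note on the zero-copy message that closes this bring-up: it is pure threshold arithmetic. This run submits 128 KiB I/O (131072 bytes), which is above the reported 65536-byte cutoff, so the zero-copy send path is skipped for these buffers; the 4 KiB runs in this suite sit far below the cutoff and never print the notice.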
00:30:14.356 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.615 [2024-06-07 21:47:14.653111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.615 [2024-06-07 21:47:14.743583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.615 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:14.615 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:14.615 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:14.616 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:14.616 21:47:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:14.875 21:47:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:14.875 21:47:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:15.443 nvme0n1 00:30:15.443 21:47:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:15.443 21:47:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:15.443 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:15.443 Zero copy mechanism will not be used. 00:30:15.443 Running I/O for 2 seconds... 
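One consistency check worth applying to every Latency table in this section: the MiB/s column should equal IOPS x IO size / 2^20. It does here: 17719.66 x 4096 B / 1048576 = 69.22 MiB/s for the 4 KiB depth-128 read above, and 3425.57 x 131072 B / 1048576 = 428.20 MiB/s for the 128 KiB depth-16 read reported just below, both matching the printed columns.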
00:30:17.348 00:30:17.348 Latency(us) 00:30:17.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.348 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:17.348 nvme0n1 : 2.00 3425.57 428.20 0.00 0.00 4666.56 1139.43 13285.93 00:30:17.348 =================================================================================================================== 00:30:17.348 Total : 3425.57 428.20 0.00 0.00 4666.56 1139.43 13285.93 00:30:17.348 0 00:30:17.348 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:17.348 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:17.348 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:17.348 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:17.348 | select(.opcode=="crc32c") 00:30:17.348 | "\(.module_name) \(.executed)"' 00:30:17.348 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1622122 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1622122 ']' 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1622122 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1622122 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1622122' 00:30:17.607 killing process with pid 1622122 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1622122 00:30:17.607 Received shutdown signal, test time was about 2.000000 seconds 00:30:17.607 00:30:17.607 Latency(us) 00:30:17.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.607 =================================================================================================================== 00:30:17.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:17.607 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1622122 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:17.865 21:47:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1622658 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1622658 /var/tmp/bperf.sock 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1622658 ']' 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:17.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:17.865 21:47:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:17.865 [2024-06-07 21:47:18.009978] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:30:17.865 [2024-06-07 21:47:18.010048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622658 ] 00:30:17.865 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.865 [2024-06-07 21:47:18.089498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.123 [2024-06-07 21:47:18.180200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.123 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:18.123 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:18.123 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:18.123 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:18.123 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:18.381 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:18.381 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:18.640 nvme0n1 00:30:18.640 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:18.640 21:47:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:18.898 Running I/O for 2 seconds... 
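Each run is graded by the same probe, visible in the jq filter repeated after every table above: read the accel layer's per-opcode counters over the bperf socket and keep only the crc32c row. As a standalone sketch (filter verbatim from the trace; the sample output line is illustrative, not taken from this run):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints e.g.: software 17719
  # read back as "acc_module acc_executed"; the test asserts executed > 0 and
  # that the module matches exp_module (software, since scan_dsa=false here)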
00:30:20.802 00:30:20.802 Latency(us) 00:30:20.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.802 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.802 nvme0n1 : 2.00 18661.94 72.90 0.00 0.00 6847.12 2904.44 11319.85 00:30:20.802 =================================================================================================================== 00:30:20.802 Total : 18661.94 72.90 0.00 0.00 6847.12 2904.44 11319.85 00:30:20.802 0 00:30:20.802 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:20.802 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:20.802 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:20.802 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:20.802 | select(.opcode=="crc32c") 00:30:20.802 | "\(.module_name) \(.executed)"' 00:30:20.802 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1622658 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1622658 ']' 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1622658 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1622658 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1622658' 00:30:21.061 killing process with pid 1622658 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1622658 00:30:21.061 Received shutdown signal, test time was about 2.000000 seconds 00:30:21.061 00:30:21.061 Latency(us) 00:30:21.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.061 =================================================================================================================== 00:30:21.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.061 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1622658 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:21.320 21:47:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1623200 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1623200 /var/tmp/bperf.sock 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1623200 ']' 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:21.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:21.320 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:21.320 [2024-06-07 21:47:21.512625] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:21.320 [2024-06-07 21:47:21.512687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623200 ] 00:30:21.320 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:21.320 Zero copy mechanism will not be used. 
00:30:21.320 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.578 [2024-06-07 21:47:21.595048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.578 [2024-06-07 21:47:21.677014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.578 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:21.578 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:30:21.578 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:21.579 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:21.579 21:47:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:21.837 21:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.837 21:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:22.096 nvme0n1 00:30:22.355 21:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:22.355 21:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:22.355 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:22.355 Zero copy mechanism will not be used. 00:30:22.355 Running I/O for 2 seconds... 
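The per-run teardown goes through one helper whose xtrace repeats above; pieced together from those autotest_common.sh lines, it is roughly the following (a condensed reconstruction in the traced order, not the exact source):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                           # still running?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 for bdevperf
      fi
      # (the real helper special-cases process_name = sudo here)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                          # reap and surface the exit code
  }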
00:30:24.258 00:30:24.258 Latency(us) 00:30:24.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.258 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:24.258 nvme0n1 : 2.00 3814.61 476.83 0.00 0.00 4186.63 3112.96 10962.39 00:30:24.258 =================================================================================================================== 00:30:24.258 Total : 3814.61 476.83 0.00 0.00 4186.63 3112.96 10962.39 00:30:24.258 0 00:30:24.258 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:24.258 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:24.258 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:24.258 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:24.258 | select(.opcode=="crc32c") 00:30:24.258 | "\(.module_name) \(.executed)"' 00:30:24.258 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1623200 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1623200 ']' 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1623200 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:24.517 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1623200 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1623200' 00:30:24.776 killing process with pid 1623200 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1623200 00:30:24.776 Received shutdown signal, test time was about 2.000000 seconds 00:30:24.776 00:30:24.776 Latency(us) 00:30:24.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.776 =================================================================================================================== 00:30:24.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1623200 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1621186 00:30:24.776 21:47:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1621186 ']' 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1621186 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:24.776 21:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1621186 00:30:24.776 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:24.776 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:24.776 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1621186' 00:30:24.776 killing process with pid 1621186 00:30:24.776 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1621186 00:30:24.776 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1621186 00:30:25.034 00:30:25.035 real 0m15.540s 00:30:25.035 user 0m30.280s 00:30:25.035 sys 0m4.073s 00:30:25.035 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:25.035 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:25.035 ************************************ 00:30:25.035 END TEST nvmf_digest_clean 00:30:25.035 ************************************ 00:30:25.035 21:47:25 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:25.035 21:47:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:25.035 21:47:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:25.035 21:47:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:25.294 ************************************ 00:30:25.294 START TEST nvmf_digest_error 00:30:25.294 ************************************ 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1624015 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1624015 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1624015 ']' 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:25.294 21:47:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:25.294 [2024-06-07 21:47:25.361731] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:25.294 [2024-06-07 21:47:25.361781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.294 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.294 [2024-06-07 21:47:25.456110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.294 [2024-06-07 21:47:25.546155] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.294 [2024-06-07 21:47:25.546196] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.294 [2024-06-07 21:47:25.546207] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.294 [2024-06-07 21:47:25.546216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.294 [2024-06-07 21:47:25.546223] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
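Relative to the clean suite, the error variant set up below adds exactly three knobs. A condensed sketch of what the xtrace that follows performs (RPC names and arguments verbatim from the trace; the target-side calls here omit the ip netns exec cvl_0_0_ns_spdk wrapper and default-socket plumbing that the harness's rpc_cmd supplies):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # target: route crc32c through the error-injecting accel module, initially disabled
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # initiator: keep NVMe error statistics and lift the bdev retry cap (-1)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm corruption just before I/O starts; the -i 256 argument is passed as traced
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

Every "data digest error" logged after perform_tests below is therefore expected: the target corrupts a slice of its crc32c computations, the initiator detects the bad digest and completes the command as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, and the unbounded retry path resubmits it.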
00:30:25.294 [2024-06-07 21:47:25.546244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.266 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:26.267 [2024-06-07 21:47:26.336609] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:26.267 null0 00:30:26.267 [2024-06-07 21:47:26.431406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.267 [2024-06-07 21:47:26.455597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1624165 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1624165 /var/tmp/bperf.sock 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1624165 ']' 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:26.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:26.267 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:26.267 [2024-06-07 21:47:26.508773] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:26.267 [2024-06-07 21:47:26.508835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624165 ] 00:30:26.548 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.548 [2024-06-07 21:47:26.590894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.548 [2024-06-07 21:47:26.677592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.548 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:26.548 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:30:26.548 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:26.548 21:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:26.807 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:26.807 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.807 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:26.807 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.807 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:26.807 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.376 nvme0n1 00:30:27.376 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:27.376 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:27.376 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:27.376 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:27.376 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:27.376 21:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:27.635 Running I/O for 2 seconds... 00:30:27.635 [2024-06-07 21:47:27.674370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.635 [2024-06-07 21:47:27.674411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.674426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.691713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.691745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.691759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.704032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.704060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.704072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.719425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.719454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.719467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.732916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.732944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.732956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.748679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.748707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.748719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.763328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.763355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7229 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.763368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.777444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.777471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.777483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.789658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.789686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.789698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.805612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.805639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.805651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.820615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.820642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.820654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.835804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.835830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.835841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.849498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.849524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.849536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.863678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.863705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:14818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.863721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.877983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.878008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.878020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.636 [2024-06-07 21:47:27.890453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.636 [2024-06-07 21:47:27.890479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.636 [2024-06-07 21:47:27.890491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:27.906102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:27.906128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:27.906140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:27.920398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:27.920424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:27.920436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:27.934061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:27.934086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:27.934099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:27.948786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:27.948811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:27.948823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:27.962952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:27.962977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:27.962989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:27.976482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:27.976508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:27.976519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:27.991955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:27.991981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:27.991993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:28.009152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:28.009179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:28.009191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:28.021446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:28.021473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:28.021484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:28.037155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:28.037180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:28.037192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:28.051047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.895 [2024-06-07 21:47:28.051072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.895 [2024-06-07 21:47:28.051084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.895 [2024-06-07 21:47:28.065987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0) 00:30:27.896 
[2024-06-07 21:47:28.066013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.896 [2024-06-07 21:47:28.066031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:27.896 [2024-06-07 21:47:28.078258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0)
00:30:27.896 [2024-06-07 21:47:28.078283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.896 [2024-06-07 21:47:28.078294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats every 12-17 ms from 21:47:28.093 through 21:47:29.649, each time on tqpair=(0x9edba0): a data digest error from nvme_tcp_accel_seq_recv_compute_crc32_done, the affected READ (qid:1, len:1, varying cid and lba), and its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:30:29.459 [2024-06-07 21:47:29.663636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9edba0)
00:30:29.459 [2024-06-07 21:47:29.663662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:29.459 [2024-06-07 21:47:29.663674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:29.459
00:30:29.459 Latency(us)
00:30:29.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:29.459 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:29.459 nvme0n1 : 2.01 17711.00 69.18 0.00 0.00 7217.44 3961.95 20137.43
00:30:29.459 ===================================================================================================================
00:30:29.459 Total : 17711.00 69.18 0.00 0.00 7217.44 3961.95 20137.43
00:30:29.459 0
00:30:29.459 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:29.459 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:29.459 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:29.459 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:29.459 | .driver_specific
00:30:29.459 | .nvme_error
00:30:29.459 | .status_code
00:30:29.459 | .command_transient_transport_error'
00:30:29.719 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 ))
00:30:29.719 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1624165
00:30:29.719 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1624165 ']'
00:30:29.719 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1624165
00:30:29.719 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:30:29.719 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:30:29.719 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1624165
00:30:29.978 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:30:29.978 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:30:29.978 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1624165'
killing process with pid 1624165
00:30:29.978 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1624165
00:30:29.978 Received shutdown signal, test time was about 2.000000 seconds
00:30:29.978
00:30:29.978 Latency(us)
00:30:29.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:29.978 ===================================================================================================================
00:30:29.978 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
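Two things in the block above are worth decoding. The "(00/22)" on every completion is NVMe status code type 0x0 with status code 0x22, the Transient Transport Error status, which is exactly the per-status-code counter the test reads back (139 in this run); and the summary line is self-consistent, since 17711 IOPS at a 4096-byte I/O size is 17711 x 4096 / 1048576 ~= 69.18 MiB/s. Reassembled from the traced commands, the transient-error check amounts to the following sketch; the helper layout is a reconstruction, not the digest.sh source, so treat the structure as illustrative:

  #!/usr/bin/env bash
  # Sketch reconstructed from the xtrace above -- only the individual
  # commands (rpc.py call, socket path, jq filter) are taken from the log.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  get_transient_errcount() {
      local bdev=$1
      # --nvme-error-stat (set before attaching the controller) makes the
      # bdev layer publish per-status-code NVMe error counters in iostat.
      "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

  # The test passes only if at least one READ completed with status 00/22.
  errcount=$(get_transient_errcount nvme0n1)
  (( errcount > 0 ))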
00:30:29.978 21:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1624165
00:30:29.978 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1624830
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1624830 /var/tmp/bperf.sock
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1624830 ']'
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-06-07 21:47:30.244265] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
[2024-06-07 21:47:30.244327] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624830 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
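The bdevperf invocation traced above is the whole definition of the second workload. Annotated below for reference; the flag meanings follow stock SPDK bdevperf usage and are editorial, not part of the log:

  # -m 2                   reactor core mask 0x2, i.e. run on core 1 only
  # -r /var/tmp/bperf.sock serve JSON-RPC control on this UNIX-domain socket
  # -w randread            random-read workload
  # -o 131072              128 KiB per I/O (hence the zero-copy threshold notice)
  # -t 2                   run the test for 2 seconds
  # -q 16                  queue depth of 16
  # -z                     start idle and wait for a perform_tests RPC
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z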
00:30:30.237 EAL: No free 2048 kB hugepages reported on node 1
00:30:30.237 [2024-06-07 21:47:30.324340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:30.237 [2024-06-07 21:47:30.410241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:30.237 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:30:30.237 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:30:30.237 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:30.237 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:30.496 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:30.496 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:30.496 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:30.496 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:30.496 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:30.496 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:30.755 nvme0n1
00:30:30.755 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:30.755 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:30.755 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:30.755 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:30.755 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:30.755 21:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:31.015 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:31.015 Zero copy mechanism will not be used.
00:30:31.015 Running I/O for 2 seconds...
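This RPC sequence is the core of the error case: enable per-status-code NVMe error accounting (--nvme-error-stat) with bdev retry count -1, clear any previous crc32c injection, attach the controller with TCP data digest enabled (--ddgst), arm the accel-layer crc32c corruption (-t corrupt -i 32, arguments verbatim from the run), and start the queued job with perform_tests. With the receive-side crc32c deliberately corrupted, every READ in the 2-second run that follows completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. generic status code 0x22, which is exactly the counter get_transient_errcount checks. As a sketch, assuming the same socket and target addressing as this run:

  # Sketch of the injection setup traced above; every command appears in the log.
  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # options verbatim from the run
  $rpc accel_error_inject_error -o crc32c -t disable                  # start from a clean slate
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst = data digest on
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32            # arm crc32c corruption
  # perform_tests kicks off the randread job that bdevperf queued at startup:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests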
00:30:31.015 [2024-06-07 21:47:31.113004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410)
00:30:31.015 [2024-06-07 21:47:31.113055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.015 [2024-06-07 21:47:31.113071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:31.015 [2024-06-07 21:47:31.125228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410)
00:30:31.015 [2024-06-07 21:47:31.125260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.015 [2024-06-07 21:47:31.125274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:31.015 [2024-06-07 21:47:31.136258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410)
00:30:31.015 [2024-06-07 21:47:31.136287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.015 [2024-06-07 21:47:31.136300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line record (data digest error -> READ command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for the rest of the 2-second run, 21:47:31.146 through 21:47:32.315, with only the timestamp, lba, cid, and sqhd fields changing; tqpair is 0x24b7410 throughout ...]
00:30:32.318 [2024-06-07 21:47:32.326716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410)
00:30:32.318
[2024-06-07 21:47:32.326743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.326755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.336306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.336342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.336354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.345790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.345818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.345830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.355812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.355841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.355854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.365999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.366035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.366048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.376095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.376122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.376134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.385898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.385926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.385938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.395602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.395629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.395641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.405335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.405362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.405375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.414477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.414505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.414516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.424501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.424529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.424541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.434464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.434491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.434503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.443967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.443995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.444007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.452808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.452836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.452848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.461497] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.461524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.461536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.470142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.470169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.318 [2024-06-07 21:47:32.470180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.318 [2024-06-07 21:47:32.478821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.318 [2024-06-07 21:47:32.478848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.478860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.487558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.487585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.487598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.496179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.496206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.496221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.504708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.504735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.504747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.513371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.513398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.513409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:30:32.319 [2024-06-07 21:47:32.522529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.522555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.522566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.531495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.531522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.531533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.540151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.540177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.540189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.548890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.548916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.548928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.557770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.557797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.557808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.566449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.566475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.566487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.575074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.575104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.575116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.319 [2024-06-07 21:47:32.583855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.319 [2024-06-07 21:47:32.583882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.319 [2024-06-07 21:47:32.583893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.592508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.592535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.592549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.601188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.601216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.601228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.609971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.609996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.610007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.619583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.619611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.619623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.628345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.628372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.628383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.636979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.637005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.637017] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.645584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.645610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.645622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.654211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.654238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.654249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.663455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.663482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.663494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.673840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.673867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.673879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.683790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.683818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.683830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.694679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.694707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.694720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.705652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.705680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.705693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.715273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.715302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.715315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.725773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.725802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.725814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.736618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.736647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.579 [2024-06-07 21:47:32.736663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.579 [2024-06-07 21:47:32.747769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.579 [2024-06-07 21:47:32.747796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.747808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.757836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.757863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.757875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.763763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.763790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.763801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.774312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.774340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:32.580 [2024-06-07 21:47:32.774352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.785066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.785095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.785107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.794052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.794080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.794092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.804701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.804730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.804742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.814372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.814399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.814411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.823392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.823420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.823432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.833105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.833133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.833145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.580 [2024-06-07 21:47:32.844277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.580 [2024-06-07 21:47:32.844306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.580 [2024-06-07 21:47:32.844318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.854326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.854353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.854365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.867374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.867402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.867414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.881429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.881456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.881469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.895771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.895798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.895810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.907381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.907408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.907420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.919790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.919818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.919834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.929646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.929673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.929685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.939180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.939206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.939217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.949406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.949434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.949445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.960190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.960218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.960230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.970685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.970714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.840 [2024-06-07 21:47:32.970727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.840 [2024-06-07 21:47:32.980488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.840 [2024-06-07 21:47:32.980516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:32.980527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:32.989592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:32.989619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:32.989630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:32.998271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 
[2024-06-07 21:47:32.998296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:32.998308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.007116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.007147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.007158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.016138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.016164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.016176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.024739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.024766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.024777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.033325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.033352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.033363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.042239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.042266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.042277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.051018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.051051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.051062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.059680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.059706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.059717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.068428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.068454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.068465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.077244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.077271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.077282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.085858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.085885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.085896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.841 [2024-06-07 21:47:33.094578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24b7410) 00:30:32.841 [2024-06-07 21:47:33.094604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.841 [2024-06-07 21:47:33.094616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.841 00:30:32.841 Latency(us) 00:30:32.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.841 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:32.841 nvme0n1 : 2.00 3186.11 398.26 0.00 0.00 5017.86 822.92 14596.65 00:30:32.841 =================================================================================================================== 00:30:32.841 Total : 3186.11 398.26 0.00 0.00 5017.86 822.92 14596.65 00:30:32.841 0 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:33.101 | .driver_specific 00:30:33.101 | 
.nvme_error 00:30:33.101 | .status_code 00:30:33.101 | .command_transient_transport_error' 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 )) 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1624830 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1624830 ']' 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1624830 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1624830 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1624830' 00:30:33.101 killing process with pid 1624830 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1624830 00:30:33.101 Received shutdown signal, test time was about 2.000000 seconds 00:30:33.101 00:30:33.101 Latency(us) 00:30:33.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.101 =================================================================================================================== 00:30:33.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:33.101 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1624830 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1625373 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1625373 /var/tmp/bperf.sock 00:30:33.360 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1625373 ']' 00:30:33.361 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:33.361 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:33.361 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:30:33.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:33.361 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:33.361 21:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:33.361 [2024-06-07 21:47:33.563550] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:33.361 [2024-06-07 21:47:33.563609] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625373 ] 00:30:33.361 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.620 [2024-06-07 21:47:33.641375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.620 [2024-06-07 21:47:33.731277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.187 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:34.187 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:30:34.188 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:34.188 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:34.446 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:34.446 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.446 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:34.446 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.446 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:34.447 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:34.705 nvme0n1 00:30:34.705 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:34.705 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.705 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:34.705 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.705 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:34.705 21:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:34.705 Running I/O for 2 seconds... 
00:30:34.705 [2024-06-07 21:47:34.941635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f6cc8
00:30:34.705 [2024-06-07 21:47:34.942798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.705 [2024-06-07 21:47:34.942834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:30:34.705 [2024-06-07 21:47:34.954998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fe2e8
00:30:34.705 [2024-06-07 21:47:34.956161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.705 [2024-06-07 21:47:34.956189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:30:34.705 [2024-06-07 21:47:34.969119] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ed4e8
00:30:34.706 [2024-06-07 21:47:34.970250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.706 [2024-06-07 21:47:34.970274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:34.985498] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f5378
00:30:34.965 [2024-06-07 21:47:34.987202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:34.987227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:34.998702] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f57b0
00:30:34.965 [2024-06-07 21:47:35.000222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.000245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.012102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e4de8
00:30:34.965 [2024-06-07 21:47:35.013594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.013617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.027546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fdeb0
00:30:34.965 [2024-06-07 21:47:35.029259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.029283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.041362] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fc128
00:30:34.965 [2024-06-07 21:47:35.043074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.043097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.055164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e88f8
00:30:34.965 [2024-06-07 21:47:35.056869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.056892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.069174] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190efae0
00:30:34.965 [2024-06-07 21:47:35.070879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.070903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.082962] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f9b30
00:30:34.965 [2024-06-07 21:47:35.084671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.084694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.096757] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fac10
00:30:34.965 [2024-06-07 21:47:35.098465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.098488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.110565] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ee5c8
00:30:34.965 [2024-06-07 21:47:35.112283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.112306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.124369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f4f40
00:30:34.965 [2024-06-07 21:47:35.126073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.126095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.138129] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f6020
00:30:34.965 [2024-06-07 21:47:35.139834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.139857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.151904] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f7100
00:30:34.965 [2024-06-07 21:47:35.153638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.965 [2024-06-07 21:47:35.153661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.965 [2024-06-07 21:47:35.165718] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f81e0
00:30:34.966 [2024-06-07 21:47:35.167425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.966 [2024-06-07 21:47:35.167452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.966 [2024-06-07 21:47:35.179493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e3d08
00:30:34.966 [2024-06-07 21:47:35.181194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.966 [2024-06-07 21:47:35.181217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.966 [2024-06-07 21:47:35.193267] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e2c28
00:30:34.966 [2024-06-07 21:47:35.195021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.966 [2024-06-07 21:47:35.195048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.966 [2024-06-07 21:47:35.207045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e1b48
00:30:34.966 [2024-06-07 21:47:35.208770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.966 [2024-06-07 21:47:35.208793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:34.966 [2024-06-07 21:47:35.220817] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e8d30
00:30:34.966 [2024-06-07 21:47:35.222539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:34.966 [2024-06-07 21:47:35.222561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.225 [2024-06-07 21:47:35.234619] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f0788
00:30:35.225 [2024-06-07 21:47:35.236331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.225 [2024-06-07 21:47:35.236354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.225 [2024-06-07 21:47:35.248398] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ff3c8
00:30:35.225 [2024-06-07 21:47:35.250104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.225 [2024-06-07 21:47:35.250127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.225 [2024-06-07 21:47:35.262161] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fc560
00:30:35.225 [2024-06-07 21:47:35.263887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.225 [2024-06-07 21:47:35.263910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.225 [2024-06-07 21:47:35.275963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fb480
00:30:35.225 [2024-06-07 21:47:35.277672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.225 [2024-06-07 21:47:35.277695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.225 [2024-06-07 21:47:35.289747] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ef6a8
00:30:35.225 [2024-06-07 21:47:35.291419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.225 [2024-06-07 21:47:35.291445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.225 [2024-06-07 21:47:35.303567] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f96f8
00:30:35.225 [2024-06-07 21:47:35.305219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.225 [2024-06-07 21:47:35.305243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.317413] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fa7d8
00:30:35.226 [2024-06-07 21:47:35.319127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.319150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.331165] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190eea00
00:30:35.226 [2024-06-07 21:47:35.332870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.332893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.344935] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e8088
00:30:35.226 [2024-06-07 21:47:35.346647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.346670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.358711] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f5be8
00:30:35.226 [2024-06-07 21:47:35.360420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.360443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.372497] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f6cc8
00:30:35.226 [2024-06-07 21:47:35.374258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.374281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.386463] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f7da8
00:30:35.226 [2024-06-07 21:47:35.388192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.388215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.400286] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e4140
00:30:35.226 [2024-06-07 21:47:35.401992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.402015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.414074] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e3060
00:30:35.226 [2024-06-07 21:47:35.415780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.415803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.427902] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e1f80
00:30:35.226 [2024-06-07 21:47:35.429647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.429670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.441725] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e0ea0
00:30:35.226 [2024-06-07 21:47:35.443463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.443486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.455524] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f8a50
00:30:35.226 [2024-06-07 21:47:35.457241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.457265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.469362] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190de038
00:30:35.226 [2024-06-07 21:47:35.471095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.471119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.226 [2024-06-07 21:47:35.483147] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fdeb0
00:30:35.226 [2024-06-07 21:47:35.484786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.226 [2024-06-07 21:47:35.484810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.496943] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fc128
00:30:35.486 [2024-06-07 21:47:35.498678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.498701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.510764] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e88f8
00:30:35.486 [2024-06-07 21:47:35.512472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.512496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.524575] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190efae0
00:30:35.486 [2024-06-07 21:47:35.526223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.526252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.538415] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f9b30
00:30:35.486 [2024-06-07 21:47:35.540164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.540189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.552194] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fac10
00:30:35.486 [2024-06-07 21:47:35.553949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.553973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.566006] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ee5c8
00:30:35.486 [2024-06-07 21:47:35.567721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.567745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.579839] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f4f40
00:30:35.486 [2024-06-07 21:47:35.581604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.581628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.593625] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f6020
00:30:35.486 [2024-06-07 21:47:35.595261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.595285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.606515] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f6020
00:30:35.486 [2024-06-07 21:47:35.608198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.608221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.621054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e9e10
00:30:35.486 [2024-06-07 21:47:35.622932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.622956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.635540] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fef90
00:30:35.486 [2024-06-07 21:47:35.637611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.637635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.650050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fc560
00:30:35.486 [2024-06-07 21:47:35.652315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.652338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.664543] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e3060
00:30:35.486 [2024-06-07 21:47:35.666993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.667016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.674340] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e0a68
00:30:35.486 [2024-06-07 21:47:35.675469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.675492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.688861] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e4140
00:30:35.486 [2024-06-07 21:47:35.690189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.690212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.702876] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f7da8
00:30:35.486 [2024-06-07 21:47:35.704214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.704237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.716717] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f6cc8
00:30:35.486 [2024-06-07 21:47:35.718077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.718100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.730546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e12d8
00:30:35.486 [2024-06-07 21:47:35.731795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.731818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:35.486 [2024-06-07 21:47:35.743427] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190eea00
00:30:35.486 [2024-06-07 21:47:35.744672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.486 [2024-06-07 21:47:35.744697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.757934] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e4de8
00:30:35.747 [2024-06-07 21:47:35.759359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.759382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.773388] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f0788
00:30:35.747 [2024-06-07 21:47:35.775106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.775131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.787190] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ff3c8
00:30:35.747 [2024-06-07 21:47:35.788894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.788917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.801044] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e8088
00:30:35.747 [2024-06-07 21:47:35.802751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.802774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.814834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e5220
00:30:35.747 [2024-06-07 21:47:35.816466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.816489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.828643] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ed0b0
00:30:35.747 [2024-06-07 21:47:35.830271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.830293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.842501] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ebfd0
00:30:35.747 [2024-06-07 21:47:35.844132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.844155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.856302] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f57b0
00:30:35.747 [2024-06-07 21:47:35.858017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.858046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.870108] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190efae0
00:30:35.747 [2024-06-07 21:47:35.871836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.871859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.883909] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e88f8
00:30:35.747 [2024-06-07 21:47:35.885649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.885676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.897701] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fc128
00:30:35.747 [2024-06-07 21:47:35.899438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.899461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.911559] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ee5c8
00:30:35.747 [2024-06-07 21:47:35.913272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.913295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.925399] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fac10
00:30:35.747 [2024-06-07 21:47:35.927105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.747 [2024-06-07 21:47:35.927129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.747 [2024-06-07 21:47:35.939185] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f9b30
00:30:35.748 [2024-06-07 21:47:35.940895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.748 [2024-06-07 21:47:35.940918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.748 [2024-06-07 21:47:35.952986] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fef90
00:30:35.748 [2024-06-07 21:47:35.954721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.748 [2024-06-07 21:47:35.954744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.748 [2024-06-07 21:47:35.966757] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e95a0
00:30:35.748 [2024-06-07 21:47:35.968499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.748 [2024-06-07 21:47:35.968523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.748 [2024-06-07 21:47:35.980554] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ea680
00:30:35.748 [2024-06-07 21:47:35.982288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.748 [2024-06-07 21:47:35.982312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.748 [2024-06-07 21:47:35.994360] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f8a50
00:30:35.748 [2024-06-07 21:47:35.996128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.748 [2024-06-07 21:47:35.996152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:35.748 [2024-06-07 21:47:36.008392] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190de038
00:30:35.748 [2024-06-07 21:47:36.010125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:35.748 [2024-06-07 21:47:36.010149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.007 [2024-06-07 21:47:36.022212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fdeb0
00:30:36.007 [2024-06-07 21:47:36.023933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.007 [2024-06-07 21:47:36.023955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.007 [2024-06-07 21:47:36.036018] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e5ec8
00:30:36.007 [2024-06-07 21:47:36.037726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.007 [2024-06-07 21:47:36.037750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.007 [2024-06-07 21:47:36.049809] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190edd58
00:30:36.007 [2024-06-07 21:47:36.051528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.007 [2024-06-07 21:47:36.051551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.007 [2024-06-07 21:47:36.063622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ecc78
00:30:36.007 [2024-06-07 21:47:36.065351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.007 [2024-06-07 21:47:36.065374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.007 [2024-06-07 21:47:36.077409] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ebb98
00:30:36.007 [2024-06-07 21:47:36.079126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.007 [2024-06-07 21:47:36.079149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.007 [2024-06-07 21:47:36.091383] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f5378
00:30:36.007 [2024-06-07 21:47:36.093094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.007 [2024-06-07 21:47:36.093118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.007 [2024-06-07 21:47:36.105166] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ef6a8
00:30:36.007 [2024-06-07 21:47:36.106893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.007 [2024-06-07 21:47:36.106916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.007 [2024-06-07 21:47:36.118931] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fb480
00:30:36.008 [2024-06-07 21:47:36.120655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.120678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.132724] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fc560
00:30:36.008 [2024-06-07 21:47:36.134456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.134479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.146525] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190eea00
00:30:36.008 [2024-06-07 21:47:36.148258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.148281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.160304] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fa7d8
00:30:36.008 [2024-06-07 21:47:36.162017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.162046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.174106] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f96f8
00:30:36.008 [2024-06-07 21:47:36.175837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.175860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.187885] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fda78
00:30:36.008 [2024-06-07 21:47:36.189615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.189638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.201652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ea248
00:30:36.008 [2024-06-07 21:47:36.203384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.203407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.215464] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e8d30
00:30:36.008 [2024-06-07 21:47:36.217183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.217206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.229220] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f0788
00:30:36.008 [2024-06-07 21:47:36.230950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.230973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.242998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ff3c8
00:30:36.008 [2024-06-07 21:47:36.244727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.244754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.256797] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e8088
00:30:36.008 [2024-06-07 21:47:36.258520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.258543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.008 [2024-06-07 21:47:36.270560] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e5220
00:30:36.008 [2024-06-07 21:47:36.272288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.008 [2024-06-07 21:47:36.272311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.284344] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ed0b0
00:30:36.268 [2024-06-07 21:47:36.286067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.286090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.298138] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ebfd0
00:30:36.268 [2024-06-07 21:47:36.299770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.299794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.311931] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f57b0
00:30:36.268 [2024-06-07 21:47:36.313636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.313659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.324836] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f57b0
00:30:36.268 [2024-06-07 21:47:36.326449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.326471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.339313] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e6300
00:30:36.268 [2024-06-07 21:47:36.341142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.341166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.352250] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190de470
00:30:36.268 [2024-06-07 21:47:36.353481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.353506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.366295] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190eea00
00:30:36.268 [2024-06-07 21:47:36.367343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.367367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.382140] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f8a50
00:30:36.268 [2024-06-07 21:47:36.384475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.384498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.396765] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fbcf0
00:30:36.268 [2024-06-07 21:47:36.399125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.399149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.406599] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e0630
00:30:36.268 [2024-06-07 21:47:36.407745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.407769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.420596] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f1ca0
00:30:36.268 [2024-06-07 21:47:36.421756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.421779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.434383] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f2d80
00:30:36.268 [2024-06-07 21:47:36.435541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.435564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.448160] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e6fa8
00:30:36.268 [2024-06-07 21:47:36.449327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.449351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.461936] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f9f68
00:30:36.268 [2024-06-07 21:47:36.463105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.463128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.475756] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fe720
00:30:36.268 [2024-06-07 21:47:36.476914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.476937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.489522] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e99d8
00:30:36.268 [2024-06-07 21:47:36.490681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.490704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.503291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e8d30
00:30:36.268 [2024-06-07 21:47:36.504457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.504481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.517099] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ed0b0
00:30:36.268 [2024-06-07 21:47:36.518260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.518283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.268 [2024-06-07 21:47:36.530852] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ebfd0
00:30:36.268 [2024-06-07 21:47:36.532016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.268 [2024-06-07 21:47:36.532043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.544637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190feb58
00:30:36.528 [2024-06-07 21:47:36.545823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.545847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.558438] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f0350
00:30:36.528 [2024-06-07 21:47:36.559626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.559649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.572203] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190edd58
00:30:36.528 [2024-06-07 21:47:36.573352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.573377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.585997] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e5ec8
00:30:36.528 [2024-06-07 21:47:36.587154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.587178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.599769] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fdeb0
00:30:36.528 [2024-06-07 21:47:36.600926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.600953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.613541] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190df118
00:30:36.528 [2024-06-07 21:47:36.614698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.614722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.627345] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e01f8
00:30:36.528 [2024-06-07 21:47:36.628506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.628529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.641108] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fd640
00:30:36.528 [2024-06-07 21:47:36.642264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.642288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.654869] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f2948
00:30:36.528 [2024-06-07 21:47:36.656038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.656062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.668676] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ea248
00:30:36.528 [2024-06-07 21:47:36.669845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.669870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.682457] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e6300
00:30:36.528 [2024-06-07 21:47:36.683616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.683640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.696244] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f92c0
00:30:36.528 [2024-06-07 21:47:36.697411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.697435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.710016] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190fe2e8
00:30:36.528 [2024-06-07 21:47:36.711171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.711194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.723773] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ea680
00:30:36.528 [2024-06-07 21:47:36.724946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.724974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.737573] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f8618
00:30:36.528 [2024-06-07 21:47:36.738738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.738761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.751364] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ecc78
00:30:36.528 [2024-06-07 21:47:36.752524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.752547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.765144] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ebb98
00:30:36.528 [2024-06-07 21:47:36.766306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.766329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.778942] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190ddc00
00:30:36.528 [2024-06-07 21:47:36.780096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.780121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.528 [2024-06-07 21:47:36.792693] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f8e88
00:30:36.528 [2024-06-07 21:47:36.793857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.528 [2024-06-07 21:47:36.793881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.788 [2024-06-07 21:47:36.806479] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e5220
00:30:36.788 [2024-06-07 21:47:36.807643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.788 [2024-06-07 21:47:36.807666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.788 [2024-06-07 21:47:36.820271] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e8088
00:30:36.788 [2024-06-07 21:47:36.821439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.788 [2024-06-07 21:47:36.821462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.788 [2024-06-07 21:47:36.834042] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190de470
00:30:36.788 [2024-06-07 21:47:36.835188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.788 [2024-06-07 21:47:36.835211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:36.788 [2024-06-07 21:47:36.847831] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190df550
00:30:36.788 [2024-06-07 21:47:36.848988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:36.788 [2024-06-07 21:47:36.849011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:36.788 [2024-06-07 21:47:36.861633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e0630 00:30:36.788 [2024-06-07 21:47:36.862789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.788 [2024-06-07 21:47:36.862812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:36.788 [2024-06-07 21:47:36.875407] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f1ca0 00:30:36.788 [2024-06-07 21:47:36.876585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.788 [2024-06-07 21:47:36.876608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:36.788 [2024-06-07 21:47:36.889222] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f2d80 00:30:36.788 [2024-06-07 21:47:36.890391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.788 [2024-06-07 21:47:36.890414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:36.788 [2024-06-07 21:47:36.902987] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190e6fa8 00:30:36.788 [2024-06-07 21:47:36.904159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.788 [2024-06-07 21:47:36.904183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:36.788 [2024-06-07 21:47:36.916790] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18148b0) with pdu=0x2000190f9f68 00:30:36.788 [2024-06-07 21:47:36.917929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.788 [2024-06-07 21:47:36.917952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:36.788 00:30:36.788 Latency(us) 00:30:36.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.788 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.788 nvme0n1 : 2.01 18416.36 71.94 0.00 0.00 6938.53 2800.17 19184.17 00:30:36.788 =================================================================================================================== 00:30:36.788 Total : 18416.36 71.94 0.00 0.00 6938.53 2800.17 19184.17 00:30:36.788 0 00:30:36.788 21:47:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:36.788 21:47:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:36.788 | .driver_specific 00:30:36.788 | .nvme_error 00:30:36.788 | .status_code 00:30:36.788 | .command_transient_transport_error' 00:30:36.788 21:47:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # 
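[Editor's note] The table above is the pass signal for this leg of the test: although every WRITE first fails its data digest, bdevperf still sustains 18416.36 IOPS (18416.36 x 4096 bytes is about 71.94 MiB/s, matching the MiB/s column) because each completion carries dnr:0, the NVMe "do not retry" bit left clear, so the driver may resubmit, while every failure increments command_transient_transport_error in the controller's error statistics. The get_transient_errcount helper traced above reads that counter back. A minimal stand-alone sketch of the same query, assuming an app on /var/tmp/bperf.sock started with bdev_nvme_set_options --nvme-error-stat (paths, bdev name, and jq filter taken from this run):

    # Sketch: read back the transient-transport-error count for a bdev,
    # the way get_transient_errcount does above. Adjust paths locally.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # the test only asserts the counter moved; in this run it came back as 144
    (( errcount > 0 )) && echo "transient transport errors counted: $errcount"

The dotted jq path is equivalent to the multi-line pipe filter shown in the trace.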
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1625373
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1625373 ']'
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1625373
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1625373
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1625373'
00:30:37.048 killing process with pid 1625373
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1625373
00:30:37.048 Received shutdown signal, test time was about 2.000000 seconds
00:30:37.048
00:30:37.048 Latency(us)
00:30:37.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.048 ===================================================================================================================
00:30:37.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:37.048 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1625373
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1626161
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1626161 /var/tmp/bperf.sock
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1626161 ']'
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
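[Editor's note] run_bperf_err repeats the measurement with 128 KiB writes at queue depth 16 against a fresh bdevperf instance. The launch idiom traced here, bdevperf started with -z so it comes up idle and waits to be configured over RPC, then a poll of its UNIX-domain socket, can be reproduced roughly as below. The retry loop and the rpc_get_methods liveness probe are a simplified stand-in for SPDK's waitforlisten helper, not its literal body; the binary path, socket, and job flags are the ones from the trace:

    # Simplified sketch of the bdevperf launch-and-wait pattern traced above.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z: start idle and wait for RPC configuration instead of running immediately
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # poll the RPC socket until the app answers (mirrors max_retries=100 above)
    for ((i = 0; i < 100; i++)); do
        "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done

Only once the socket answers does the test push configuration RPCs at the new process, which is what the next records show.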
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:37.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:30:37.307 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:37.307 [2024-06-07 21:47:37.501581] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:30:37.307 [2024-06-07 21:47:37.501641] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626161 ]
00:30:37.307 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:37.307 Zero copy mechanism will not be used.
00:30:37.307 EAL: No free 2048 kB hugepages reported on node 1
00:30:37.566 [2024-06-07 21:47:37.581660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:37.566 [2024-06-07 21:47:37.672280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:37.566 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:30:37.566 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:30:37.566 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:37.566 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:37.825 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:37.825 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:37.825 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:37.825 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:37.825 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:37.825 21:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:38.084 nvme0n1
00:30:38.084 21:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:38.084 21:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:38.084 21:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:38.084 21:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:38.084 21:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:38.084 21:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
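[Editor's note] With the app listening, four RPCs set up the whole failure scenario and perform_tests starts the clock. Restated as a plain script, with every RPC and argument taken verbatim from the trace (the rpc wrapper function is mine; note the ordering: bdev_nvme_set_options must run before the controller is attached, --ddgst enables the NVMe/TCP data digest on the new connection, and the final accel_error_inject_error call injects corruption into the host's crc32c results, which is what produces the digest failures logged below):

    # The digest-error scenario as a stand-alone script; RPC names and
    # arguments are copied from the traced run above.
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry without limit
    rpc accel_error_inject_error -o crc32c -t disable                   # clear any stale injection first
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst turns on the TCP data digest
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # inject crc32c corruption (flags as used here)
    # drive the configured job set over the same RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

In the completions that follow, len:32 matches the 131072-byte I/O size (32 blocks of 4096 bytes), and dnr:0 again leaves the bdev layer free to retry each failed WRITE.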
00:30:38.084 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:38.084 Zero copy mechanism will not be used.
00:30:38.084 Running I/O for 2 seconds...
00:30:38.084 [2024-06-07 21:47:38.322550] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90
00:30:38.084 [2024-06-07 21:47:38.323073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:38.084 [2024-06-07 21:47:38.323107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the remainder of the 2-second run, from 21:47:38.334849 through 21:47:39.046372, continues in this pattern and is omitted: a tcp.c:2062:data_crc32_calc_done data digest *ERROR* on tqpair 0x1649a20 with pdu=0x2000190fef90, the 32-block WRITE it hit (qid:1 cid:15), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with only the timestamp, lba, and sqhd changing from record to record ...]
00:30:38.869 [2024-06-07 21:47:39.054975] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.055423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.055447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.063069] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.063536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.063560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.070783] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.071228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.071256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.079071] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.079529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.079553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.086506] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.086940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.086963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.094557] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.094983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.095007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.102250] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.102666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.102690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.109927] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.110359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.110384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.117668] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.118095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.118119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.125276] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.125729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.125753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.869 [2024-06-07 21:47:39.132979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:38.869 [2024-06-07 21:47:39.133415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.869 [2024-06-07 21:47:39.133439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.140664] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.141095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.141119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.147225] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.147634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.147658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.154643] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.155079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.155103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:39.130 [2024-06-07 21:47:39.161781] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.162216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.162240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.169079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.169504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.169529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.175652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.176078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.176102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.181748] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.182138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.182163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.187852] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.188259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.188283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.193830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.194234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.194258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.199768] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.200163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.200187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.205742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.206129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.206154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.211637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.212044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.212068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.217650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.218054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.218078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.223592] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.223975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.224000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.229488] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.229890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.229914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.235366] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.235763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.235788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.130 [2024-06-07 21:47:39.241226] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.130 [2024-06-07 21:47:39.241630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.130 [2024-06-07 21:47:39.241654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.247116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.247519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.247542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.252984] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.253388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.253412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.258891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.259280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.259304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.264773] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.265162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.265186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.270853] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.271261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.271287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.276816] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.277225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.277249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.282684] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.283088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.283112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.288555] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.288950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.288974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.294431] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.294818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.294842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.300297] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.300700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.300724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.306252] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.306653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.306677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.312157] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.312555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.312579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.318126] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.318525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.318549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.324054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.324445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 
[2024-06-07 21:47:39.324468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.329892] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.330299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.330323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.335848] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.336253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.336277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.342489] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.342888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.342912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.349620] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.350021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.350055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.355919] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.356314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.356339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.362154] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.362539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.362564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.368284] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.368689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.368713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.374387] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.374784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.374810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.381715] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.382128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.382152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.388597] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.388985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.389009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.131 [2024-06-07 21:47:39.394998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.131 [2024-06-07 21:47:39.395408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.131 [2024-06-07 21:47:39.395432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.401132] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.401512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.401536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.407038] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.407435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.407460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.412945] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.413349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.413373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.418842] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.419245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.419270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.424871] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.425272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.425296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.431011] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.431407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.431431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.438416] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.438804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.438829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.444723] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.445128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.445152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.450667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.451063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.451087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.456706] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.457116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.457140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.462870] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.463279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.463303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.469596] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.469997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.470022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.476618] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.477015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.477048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.482717] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.483111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.483135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.488897] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.489280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.489304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.494765] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.495173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.495197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.501697] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 
[2024-06-07 21:47:39.502156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.502179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.509565] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.510001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.510032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.517268] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.517654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.517686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.524927] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.525331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.525355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.532370] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.532766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.532790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.538818] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.539223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.539247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.545102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.545492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.545517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.551704] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.552174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.552199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.560055] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.560572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.560596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.567473] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.392 [2024-06-07 21:47:39.567878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.392 [2024-06-07 21:47:39.567902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.392 [2024-06-07 21:47:39.575387] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.575810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.575834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.583627] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.584034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.584058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.591257] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.591663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.591687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.598431] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.598832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.598856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.604871] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.605289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.605313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.611129] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.611526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.611550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.617116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.617500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.617523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.623084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.623484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.623507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.629102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.629498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.629521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.635148] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.635543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.635566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.393 [2024-06-07 21:47:39.641117] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90 00:30:39.393 [2024-06-07 21:47:39.641509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.393 [2024-06-07 21:47:39.641532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
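[note: the "Data digest error" records above are the NVMe/TCP data-digest (DDGST) check failing: when a DATA PDU carries a digest, the receiver recomputes CRC-32C over the PDU payload and rejects the PDU on mismatch, which the host then sees as the "(00/22)" completions logged here (status code type 0x0, status code 0x22, Command Transient Transport Error — a retryable transport-level failure, hence dnr:0). A minimal sketch of the digest calculation follows, assuming only the standard CRC-32C (Castagnoli) parameters used by NVMe/TCP; the function name and test payload are illustrative, not SPDK's API.]

# crc32c_sketch.py - bitwise CRC-32C (Castagnoli), the checksum NVMe/TCP
# uses for its optional header/data digests (HDGST/DDGST).
# Reflected polynomial 0x82F63B78; initial value and final XOR 0xFFFFFFFF.
def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    # Standard CRC-32C check value for the ASCII string "123456789".
    assert crc32c(b"123456789") == 0xE3069283
    # A receiver would compare crc32c(pdu_payload) against the PDU's
    # DDGST field; a mismatch is the "Data digest error" path above.
    print(hex(crc32c(b"example pdu payload")))

[the bitwise loop is only for clarity; SPDK itself computes digests with spdk_crc32c_update(), which is typically table-driven or hardware-accelerated]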
00:30:39.393 [2024-06-07 21:47:39.647256] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90
00:30:39.393 [2024-06-07 21:47:39.647658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:39.393 [2024-06-07 21:47:39.647682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern — a data_crc32_calc_done data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90, the failing WRITE sqid:1 cid:15, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for roughly a hundred further writes with varying lba values between 21:47:39.653 and 21:47:40.305 ...]
00:30:40.177 [2024-06-07 21:47:40.305467] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1649a20) with pdu=0x2000190fef90
00:30:40.177 [2024-06-07 21:47:40.305839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:40.177 [2024-06-07 21:47:40.305863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:40.177
00:30:40.177 Latency(us)
00:30:40.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:40.177 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:40.177 nvme0n1 : 2.00 4607.77 575.97 0.00 0.00 3465.93 2472.49 14954.12
00:30:40.177 ===================================================================================================================
00:30:40.177 Total : 4607.77 575.97 0.00 0.00 3465.93 2472.49 14954.12
00:30:40.177 0
00:30:40.177 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:40.177 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:40.177 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:40.177 | .driver_specific
00:30:40.177 | .nvme_error
00:30:40.177 | .status_code
00:30:40.177 | .command_transient_transport_error'
00:30:40.177 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:40.436 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 297 > 0 ))
00:30:40.436 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1626161
00:30:40.436 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1626161 ']'
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1626161
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1626161
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1626161'
00:30:40.437 killing process with pid 1626161
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1626161
00:30:40.437 Received shutdown signal, test time was about 2.000000 seconds
00:30:40.437
00:30:40.437 Latency(us)
00:30:40.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:40.437 ===================================================================================================================
00:30:40.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1626161
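The trace above shows how host/digest.sh counts the injected digest errors: it queries bdevperf's RPC socket for per-bdev iostat and pulls the transient-transport-error counter out with jq. A minimal bash sketch of that query, assuming an SPDK checkout at $SPDK_DIR and a bdevperf instance listening on /var/tmp/bperf.sock as in this run:

    # Count NVMe "transient transport error" completions seen by a bdev,
    # mirroring the get_transient_errcount trace above.
    get_transient_errcount() {
        local bdev=1ドル
        # bdev_get_iostat reports the per-bdev NVMe error counters under
        # .driver_specific.nvme_error.status_code (path taken from the jq
        # filter in the trace above)
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # usage, matching the (( 297 > 0 )) assertion in the trace:
    (( $(get_transient_errcount nvme0n1) > 0 ))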
00:30:40.437 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1624015
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1624015 ']'
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1624015
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1624015
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1624015'
00:30:40.696 killing process with pid 1624015
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1624015
00:30:40.696 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1624015
00:30:40.956 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1624015
00:30:40.956
00:30:40.956 real 0m15.685s
00:30:40.956 user 0m30.433s
00:30:40.956 sys 0m4.056s
00:30:40.956 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:40.956 21:47:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:40.956 ************************************
00:30:40.956 END TEST nvmf_digest_error
00:30:40.956 ************************************
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:40.956 rmmod nvme_tcp
00:30:40.956 rmmod nvme_fabrics
00:30:40.956 rmmod nvme_keyring
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1624015 ']'
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1624015
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 1624015 ']'
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 1624015
00:30:40.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1624015) - No such process
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 1624015 is not found'
00:30:40.956 Process with pid 1624015 is not found
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
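The nvmfcleanup trace above unloads the kernel initiator modules inside a retry loop. A minimal bash sketch of that pattern, using the same module names; the break and back-off handling are assumptions, since only the first (successful) iteration appears in the log:

    sync
    set +e                   # module removal may fail while connections drain
    for i in {1..20}; do
        # -v echoes the underlying rmmod commands, as seen in the log above
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1              # assumed pause between retries
    done
    set -e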
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:40.956 21:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:43.489 21:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:43.490
00:30:43.490 real 0m40.135s
00:30:43.490 user 1m2.589s
00:30:43.490 sys 0m13.160s
00:30:43.490 21:47:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:43.490 21:47:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:30:43.490 ************************************
00:30:43.490 END TEST nvmf_digest
00:30:43.490 ************************************
00:30:43.490 21:47:43 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:30:43.490 21:47:43 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:30:43.490 21:47:43 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:30:43.490 21:47:43 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:30:43.490 21:47:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:30:43.490 21:47:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:43.490 21:47:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:43.490 ************************************
00:30:43.490 START TEST nvmf_bdevperf
00:30:43.490 ************************************
00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:30:43.490 * Looking for test storage...
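The real/user/sys triplets and the START TEST / END TEST banners above come from a run_test-style wrapper around each test script. A hedged sketch of the shape of that wrapper (the actual helper lives in SPDK's test/common/autotest_common.sh; the banner and bookkeeping details below are assumptions reconstructed from the log output):

    run_test() {
        local test_name=1ドル
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"            # bash's time keyword emits the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # as invoked above:
    # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp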
00:30:43.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:43.490 21:47:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:50.079 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:50.079 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:50.079 Found net devices under 0000:af:00.0: cvl_0_0 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:50.079 Found net devices under 0000:af:00.1: cvl_0_1 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:50.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:30:50.079 00:30:50.079 --- 10.0.0.2 ping statistics --- 00:30:50.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.079 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:30:50.079 00:30:50.079 --- 10.0.0.1 ping statistics --- 00:30:50.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.079 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:50.079 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1630718 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1630718 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1630718 ']' 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:50.080 21:47:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.080 [2024-06-07 21:47:49.922505] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:50.080 [2024-06-07 21:47:49.922560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.080 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.080 [2024-06-07 21:47:50.012882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.080 [2024-06-07 21:47:50.116223] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
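
Before the target starts, the harness has split the two E810 ports across network stacks: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace for the target while cvl_0_1 stays in the root namespace for the initiator, so the NVMe/TCP traffic crosses the physical link. The ping exchange above verifies both directions. Condensed from the xtrace records, using this run's interface names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
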
00:30:50.080 [2024-06-07 21:47:50.116265] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.080 [2024-06-07 21:47:50.116275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.080 [2024-06-07 21:47:50.116284] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.080 [2024-06-07 21:47:50.116292] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.080 [2024-06-07 21:47:50.116342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.080 [2024-06-07 21:47:50.116443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.080 [2024-06-07 21:47:50.116444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:50.646 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.646 [2024-06-07 21:47:50.911173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.905 Malloc0 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
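
The rpc_cmd calls above (host/bdevperf.sh@17 through @21) are the target's entire configuration: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, the bdev as its namespace, and a listener on 10.0.0.2:4420. rpc_cmd wraps the target's RPC socket; issued by hand the same sequence would look roughly like this with scripts/rpc.py (the rpc shell variable is shorthand introduced here):

    rpc="ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, 8 KiB I/O unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0                           # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0       # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
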
00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.905 [2024-06-07 21:47:50.974762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.905 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.905 { 00:30:50.905 "params": { 00:30:50.905 "name": "Nvme$subsystem", 00:30:50.905 "trtype": "$TEST_TRANSPORT", 00:30:50.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.906 "adrfam": "ipv4", 00:30:50.906 "trsvcid": "$NVMF_PORT", 00:30:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.906 "hdgst": ${hdgst:-false}, 00:30:50.906 "ddgst": ${ddgst:-false} 00:30:50.906 }, 00:30:50.906 "method": "bdev_nvme_attach_controller" 00:30:50.906 } 00:30:50.906 EOF 00:30:50.906 )") 00:30:50.906 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:50.906 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:50.906 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:50.906 21:47:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:50.906 "params": { 00:30:50.906 "name": "Nvme1", 00:30:50.906 "trtype": "tcp", 00:30:50.906 "traddr": "10.0.0.2", 00:30:50.906 "adrfam": "ipv4", 00:30:50.906 "trsvcid": "4420", 00:30:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.906 "hdgst": false, 00:30:50.906 "ddgst": false 00:30:50.906 }, 00:30:50.906 "method": "bdev_nvme_attach_controller" 00:30:50.906 }' 00:30:50.906 [2024-06-07 21:47:51.029779] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:30:50.906 [2024-06-07 21:47:51.029835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1630857 ] 00:30:50.906 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.906 [2024-06-07 21:47:51.117853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.164 [2024-06-07 21:47:51.205499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.164 Running I/O for 1 seconds... 
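
Note that bdevperf never touches the target's RPC socket: its one NVMe bdev comes from the JSON config streamed over /dev/fd/62, whose single bdev_nvme_attach_controller entry connects to the subsystem as host nqn.2016-06.io.spdk:host1 and produces the Nvme1n1 bdev reported below. Attaching the same controller to an already-running SPDK app by hand would look roughly like the following (flags inferred from the JSON params above, so treat it as a sketch):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
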
00:30:52.541
00:30:52.541 Latency(us)
00:30:52.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:52.541 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:52.541 Verification LBA range: start 0x0 length 0x4000
00:30:52.541 Nvme1n1 : 1.00 7570.17 29.57 0.00 0.00 16828.53 789.41 14417.92
00:30:52.542 ===================================================================================================================
00:30:52.542 Total : 7570.17 29.57 0.00 0.00 16828.53 789.41 14417.92
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1631176
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:52.542 {
00:30:52.542 "params": {
00:30:52.542 "name": "Nvme$subsystem",
00:30:52.542 "trtype": "$TEST_TRANSPORT",
00:30:52.542 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:52.542 "adrfam": "ipv4",
00:30:52.542 "trsvcid": "$NVMF_PORT",
00:30:52.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:52.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:52.542 "hdgst": ${hdgst:-false},
00:30:52.542 "ddgst": ${ddgst:-false}
00:30:52.542 },
00:30:52.542 "method": "bdev_nvme_attach_controller"
00:30:52.542 }
00:30:52.542 EOF
00:30:52.542 )")
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:30:52.542 21:47:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:52.542 "params": {
00:30:52.542 "name": "Nvme1",
00:30:52.542 "trtype": "tcp",
00:30:52.542 "traddr": "10.0.0.2",
00:30:52.542 "adrfam": "ipv4",
00:30:52.542 "trsvcid": "4420",
00:30:52.542 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:52.542 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:52.542 "hdgst": false,
00:30:52.542 "ddgst": false
00:30:52.542 },
00:30:52.542 "method": "bdev_nvme_attach_controller"
00:30:52.542 }'
00:30:52.542 [2024-06-07 21:47:52.664795] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:30:52.542 [2024-06-07 21:47:52.664856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631176 ]
00:30:52.542 EAL: No free 2048 kB hugepages reported on node 1
00:30:52.542 [2024-06-07 21:47:52.752773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:52.802 [2024-06-07 21:47:52.839469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:52.802 Running I/O for 15 seconds...
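
A quick consistency check on the 1-second run above: the MiB/s column is just IOPS times the 4096-byte I/O size, and with queue depth 128 the average latency implies nearly the same rate by Little's law:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7570.17 * 4096 / (1024 * 1024) }'    # 29.57, matching the table
    awk 'BEGIN { printf "%.0f IOPS\n", 128 / (16828.53 / 1e6) }'             # ~7606, close to the measured 7570.17
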
00:30:55.386 21:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1630718
00:30:55.386 21:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:30:55.386 [2024-06-07 21:47:55.634177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.386 [2024-06-07 21:47:55.634220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every remaining queued WRITE, lba 22448 through 23160 in steps of 8, as all outstanding I/O is aborted after the target process is killed ...]
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.388 [2024-06-07 21:47:55.636703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 
21:47:55.636715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.388 [2024-06-07 21:47:55.636812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.388 [2024-06-07 21:47:55.636823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.636833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.636845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.636854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.636867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.636876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.636889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.636899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.636911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.636920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.636932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.636941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.636953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.636962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.636974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.636984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.636996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.637006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.637018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.389 [2024-06-07 21:47:55.637033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.637045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.389 [2024-06-07 21:47:55.637054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.637066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc39f0 is same with the state(5) to be set 00:30:55.389 [2024-06-07 21:47:55.637077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.389 [2024-06-07 21:47:55.637084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.389 [2024-06-07 21:47:55.637093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:8 PRP1 0x0 PRP2 0x0 00:30:55.389 [2024-06-07 21:47:55.637104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.389 [2024-06-07 21:47:55.637154] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dc39f0 was disconnected and freed. reset controller. 
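Every queued command above fails with the same completion status. The "(00/08)" pair is the NVMe status code type / status code: SCT 0x0 (generic command status) with SC 0x08, i.e. Command Aborted due to SQ Deletion, which is the expected status when a qpair's submission queue is torn down during a controller reset; p/m/dnr are the phase, more, and do-not-retry bits. A minimal standalone sketch (not SPDK code) of decoding that status from completion queue entry dword 3, following the NVMe base specification layout:

/* Minimal sketch (not SPDK code): decode the "(SCT/SC)" pair that
 * spdk_nvme_print_completion logs, from a raw completion DW3.
 * NVMe base spec layout for DW3: bit 16 = phase tag (P); status
 * field bits 31:17 = DNR, M, CRD, SCT (27:25), SC (24:17). */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t sct;   /* status code type, e.g. 0x0 = generic */
    uint8_t sc;    /* status code, e.g. 0x08 = aborted, SQ deletion */
    int p, m, dnr; /* phase tag, more, do-not-retry */
};

static struct nvme_status decode_cpl_dw3(uint32_t dw3)
{
    struct nvme_status s;
    s.p   = (dw3 >> 16) & 0x1;
    s.sc  = (dw3 >> 17) & 0xff;
    s.sct = (dw3 >> 25) & 0x7;
    s.m   = (dw3 >> 30) & 0x1;
    s.dnr = (dw3 >> 31) & 0x1;
    return s;
}

int main(void)
{
    /* SCT 0x0 / SC 0x08 with p:0 m:0 dnr:0, as in the log above */
    uint32_t dw3 = (0x0u << 25) | (0x08u << 17);
    struct nvme_status s = decode_cpl_dw3(dw3);
    printf("(%02x/%02x) p:%d m:%d dnr:%d%s\n", s.sct, s.sc, s.p, s.m, s.dnr,
           (s.sct == 0 && s.sc == 0x08) ? "  ABORTED - SQ DELETION" : "");
    return 0;
}

Compiled with e.g. cc decode.c, it prints "(00/08) p:0 m:0 dnr:0  ABORTED - SQ DELETION", matching the completion lines above.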
00:30:55.389 [2024-06-07 21:47:55.637207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.389 [2024-06-07 21:47:55.637220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.389 [2024-06-07 21:47:55.637231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.389 [2024-06-07 21:47:55.637240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.389 [2024-06-07 21:47:55.637253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.389 [2024-06-07 21:47:55.637263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.389 [2024-06-07 21:47:55.637274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.389 [2024-06-07 21:47:55.637283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.389 [2024-06-07 21:47:55.637292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:55.389 [2024-06-07 21:47:55.641510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:55.389 [2024-06-07 21:47:55.641540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:55.389 [2024-06-07 21:47:55.642373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.389 [2024-06-07 21:47:55.642395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:55.389 [2024-06-07 21:47:55.642405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:55.389 [2024-06-07 21:47:55.642671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:55.389 [2024-06-07 21:47:55.642935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:55.389 [2024-06-07 21:47:55.642947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:55.389 [2024-06-07 21:47:55.642958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:55.389 [2024-06-07 21:47:55.647220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
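The reset itself dies at the socket layer: errno 111 is ECONNREFUSED on Linux, meaning nothing is accepting connections on 10.0.0.2:4420 (the NVMe/TCP default port) at that moment, presumably because the test has taken the target down. A minimal sketch using only POSIX sockets (this is not SPDK's posix_sock_create()) that reproduces the same failure mode:

/* Minimal sketch, not SPDK code: a plain blocking connect() to the
 * NVMe/TCP listener address from the log. With no listener on
 * 10.0.0.2:4420, connect() fails with errno 111 (ECONNREFUSED),
 * matching "connect() failed, errno = 111" above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    close(fd);
    return 0;
}

With no listener on the address, connect() returns -1 and the program prints "connect() failed, errno = 111 (Connection refused)".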
00:30:55.649 [2024-06-07 21:47:55.656504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:55.649 [2024-06-07 21:47:55.657080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.649 [2024-06-07 21:47:55.657127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:55.649 [2024-06-07 21:47:55.657151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:55.649 [2024-06-07 21:47:55.657729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:55.649 [2024-06-07 21:47:55.658098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:55.649 [2024-06-07 21:47:55.658110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:55.649 [2024-06-07 21:47:55.658120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:55.649 [2024-06-07 21:47:55.662375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 2024-06-07 21:47:55.671167 through 21:47:56.101174: 30 further reset attempts against tqpair=0x1b92dc0 (addr=10.0.0.2, port=4420), identical except for timestamps — resetting controller -> connect() failed, errno = 111 -> sock connection error -> recv state unchanged -> Failed to flush (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state. -> Resetting controller failed. — elided ...]
00:30:55.914 [2024-06-07 21:47:56.110440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:55.914 [2024-06-07 21:47:56.111047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.914 [2024-06-07 21:47:56.111090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:55.914 [2024-06-07 21:47:56.111112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:55.914 [2024-06-07 21:47:56.111655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:55.914 [2024-06-07 21:47:56.111921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:55.914 [2024-06-07 21:47:56.111932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:55.914 [2024-06-07 21:47:56.111945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:55.914 [2024-06-07 21:47:56.116191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:55.914 [2024-06-07 21:47:56.125220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.914 [2024-06-07 21:47:56.125833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.914 [2024-06-07 21:47:56.125876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:55.914 [2024-06-07 21:47:56.125898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:55.914 [2024-06-07 21:47:56.126467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:55.914 [2024-06-07 21:47:56.126857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.914 [2024-06-07 21:47:56.126873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.914 [2024-06-07 21:47:56.126887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.914 [2024-06-07 21:47:56.133125] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.914 [2024-06-07 21:47:56.140245] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.914 [2024-06-07 21:47:56.140835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.914 [2024-06-07 21:47:56.140877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:55.914 [2024-06-07 21:47:56.140899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:55.914 [2024-06-07 21:47:56.141488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:55.914 [2024-06-07 21:47:56.142086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.914 [2024-06-07 21:47:56.142099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.914 [2024-06-07 21:47:56.142108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.914 [2024-06-07 21:47:56.146355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.914 [2024-06-07 21:47:56.154875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.914 [2024-06-07 21:47:56.155406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.914 [2024-06-07 21:47:56.155449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:55.914 [2024-06-07 21:47:56.155471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:55.914 [2024-06-07 21:47:56.156015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:55.914 [2024-06-07 21:47:56.156286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.914 [2024-06-07 21:47:56.156299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.914 [2024-06-07 21:47:56.156308] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.914 [2024-06-07 21:47:56.160552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.914 [2024-06-07 21:47:56.169567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.914 [2024-06-07 21:47:56.170158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.914 [2024-06-07 21:47:56.170208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:55.914 [2024-06-07 21:47:56.170230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:55.914 [2024-06-07 21:47:56.170809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:55.914 [2024-06-07 21:47:56.171114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.914 [2024-06-07 21:47:56.171127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.914 [2024-06-07 21:47:56.171136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.914 [2024-06-07 21:47:56.175382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.175 [2024-06-07 21:47:56.184157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.175 [2024-06-07 21:47:56.184743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.175 [2024-06-07 21:47:56.184764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.175 [2024-06-07 21:47:56.184775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.175 [2024-06-07 21:47:56.185046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.175 [2024-06-07 21:47:56.185312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.175 [2024-06-07 21:47:56.185323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.175 [2024-06-07 21:47:56.185332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.175 [2024-06-07 21:47:56.189577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.175 [2024-06-07 21:47:56.198846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.175 [2024-06-07 21:47:56.199437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.175 [2024-06-07 21:47:56.199480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.175 [2024-06-07 21:47:56.199502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.175 [2024-06-07 21:47:56.200038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.175 [2024-06-07 21:47:56.200305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.175 [2024-06-07 21:47:56.200316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.175 [2024-06-07 21:47:56.200325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.175 [2024-06-07 21:47:56.204567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.175 [2024-06-07 21:47:56.213585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.175 [2024-06-07 21:47:56.214175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.175 [2024-06-07 21:47:56.214218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.175 [2024-06-07 21:47:56.214239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.175 [2024-06-07 21:47:56.214816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.175 [2024-06-07 21:47:56.215149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.175 [2024-06-07 21:47:56.215162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.175 [2024-06-07 21:47:56.215171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.175 [2024-06-07 21:47:56.219410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.175 [2024-06-07 21:47:56.228165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.175 [2024-06-07 21:47:56.228756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.175 [2024-06-07 21:47:56.228777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.175 [2024-06-07 21:47:56.228787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.175 [2024-06-07 21:47:56.229059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.175 [2024-06-07 21:47:56.229324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.175 [2024-06-07 21:47:56.229336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.175 [2024-06-07 21:47:56.229345] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.175 [2024-06-07 21:47:56.233590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.175 [2024-06-07 21:47:56.242854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.175 [2024-06-07 21:47:56.243411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.175 [2024-06-07 21:47:56.243432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.175 [2024-06-07 21:47:56.243442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.175 [2024-06-07 21:47:56.243705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.175 [2024-06-07 21:47:56.243971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.175 [2024-06-07 21:47:56.243983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.175 [2024-06-07 21:47:56.243992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.175 [2024-06-07 21:47:56.248236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.176 [2024-06-07 21:47:56.257491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.258078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.258122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.258144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.258723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.259315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.259341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.259361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.263645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.176 [2024-06-07 21:47:56.272159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.272768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.272810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.272831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.273353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.273618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.273630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.273639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.277874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.176 [2024-06-07 21:47:56.286889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.287387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.287408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.287419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.287682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.287947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.287958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.287967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.292213] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.176 [2024-06-07 21:47:56.301477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.302060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.302081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.302091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.302355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.302619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.302630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.302639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.306884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.176 [2024-06-07 21:47:56.316212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.316801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.316822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.316837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.317107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.317373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.317385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.317394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.321644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.176 [2024-06-07 21:47:56.330916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.331499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.331521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.331531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.331795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.332069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.332081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.332091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.336337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.176 [2024-06-07 21:47:56.345604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.346194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.346238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.346260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.346840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.347415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.347428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.347437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.351676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.176 [2024-06-07 21:47:56.360192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.360690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.360711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.360721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.360985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.361258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.361274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.361284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.365526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.176 [2024-06-07 21:47:56.374798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.375370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.375392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.375401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.375665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.375929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.375941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.375950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.380198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.176 [2024-06-07 21:47:56.389465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.176 [2024-06-07 21:47:56.390058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.176 [2024-06-07 21:47:56.390100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.176 [2024-06-07 21:47:56.390122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.176 [2024-06-07 21:47:56.390473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.176 [2024-06-07 21:47:56.390738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.176 [2024-06-07 21:47:56.390749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.176 [2024-06-07 21:47:56.390758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.176 [2024-06-07 21:47:56.394998] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.177 [2024-06-07 21:47:56.404039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.177 [2024-06-07 21:47:56.404627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.177 [2024-06-07 21:47:56.404669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.177 [2024-06-07 21:47:56.404690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.177 [2024-06-07 21:47:56.405279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.177 [2024-06-07 21:47:56.405612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.177 [2024-06-07 21:47:56.405624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.177 [2024-06-07 21:47:56.405633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.177 [2024-06-07 21:47:56.409881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.177 [2024-06-07 21:47:56.418652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.177 [2024-06-07 21:47:56.419225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.177 [2024-06-07 21:47:56.419268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.177 [2024-06-07 21:47:56.419290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.177 [2024-06-07 21:47:56.419849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.177 [2024-06-07 21:47:56.420119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.177 [2024-06-07 21:47:56.420132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.177 [2024-06-07 21:47:56.420141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.177 [2024-06-07 21:47:56.424385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.177 [2024-06-07 21:47:56.433401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.177 [2024-06-07 21:47:56.433999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.177 [2024-06-07 21:47:56.434052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.177 [2024-06-07 21:47:56.434075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.177 [2024-06-07 21:47:56.434653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.177 [2024-06-07 21:47:56.434953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.177 [2024-06-07 21:47:56.434965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.177 [2024-06-07 21:47:56.434974] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.177 [2024-06-07 21:47:56.439225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.437 [2024-06-07 21:47:56.447992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.437 [2024-06-07 21:47:56.448553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.437 [2024-06-07 21:47:56.448574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.437 [2024-06-07 21:47:56.448585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.437 [2024-06-07 21:47:56.448848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.437 [2024-06-07 21:47:56.449120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.437 [2024-06-07 21:47:56.449132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.437 [2024-06-07 21:47:56.449142] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.437 [2024-06-07 21:47:56.453381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.437 [2024-06-07 21:47:56.462646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.437 [2024-06-07 21:47:56.463238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.437 [2024-06-07 21:47:56.463280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.437 [2024-06-07 21:47:56.463302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.437 [2024-06-07 21:47:56.463815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.437 [2024-06-07 21:47:56.464088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.437 [2024-06-07 21:47:56.464101] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.437 [2024-06-07 21:47:56.464110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.437 [2024-06-07 21:47:56.468345] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.437 [2024-06-07 21:47:56.477364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.437 [2024-06-07 21:47:56.477951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.437 [2024-06-07 21:47:56.477993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.437 [2024-06-07 21:47:56.478014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.437 [2024-06-07 21:47:56.478589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.437 [2024-06-07 21:47:56.478854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.437 [2024-06-07 21:47:56.478866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.437 [2024-06-07 21:47:56.478875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.437 [2024-06-07 21:47:56.483126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.437 [2024-06-07 21:47:56.492141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.437 [2024-06-07 21:47:56.492725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.437 [2024-06-07 21:47:56.492746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.437 [2024-06-07 21:47:56.492756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.437 [2024-06-07 21:47:56.493020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.437 [2024-06-07 21:47:56.493293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.437 [2024-06-07 21:47:56.493305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.437 [2024-06-07 21:47:56.493314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.437 [2024-06-07 21:47:56.497553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.437 [2024-06-07 21:47:56.506857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.437 [2024-06-07 21:47:56.507460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.437 [2024-06-07 21:47:56.507503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.437 [2024-06-07 21:47:56.507524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.437 [2024-06-07 21:47:56.508001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.437 [2024-06-07 21:47:56.508273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.508286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.508299] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.512537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.438 [2024-06-07 21:47:56.521548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.522141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.522184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.522206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.522686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.522950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.522962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.522972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.527222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.438 [2024-06-07 21:47:56.536259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.536904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.536946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.536968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.537563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.538007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.538019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.538034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.542289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.438 [2024-06-07 21:47:56.550814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.551427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.551448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.551458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.551721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.551985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.551998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.552007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.556262] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.438 [2024-06-07 21:47:56.565538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.566155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.566199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.566221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.566800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.567234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.567247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.567256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.571505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.438 [2024-06-07 21:47:56.580271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.580886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.580927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.580949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.581539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.582012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.582024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.582038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.586289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.438 [2024-06-07 21:47:56.594826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.595359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.595410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.595432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.595965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.596238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.596251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.596260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.600502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.438 [2024-06-07 21:47:56.609534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.610101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.610123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.610132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.610401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.610667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.610679] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.610688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.614931] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.438 [2024-06-07 21:47:56.624203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.624782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.624825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.624848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.625441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.626031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.626056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.626084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.630330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.438 [2024-06-07 21:47:56.638846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.639455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.639477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.639487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.639751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.640016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.640034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.640044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.438 [2024-06-07 21:47:56.644283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.438 [2024-06-07 21:47:56.653545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.438 [2024-06-07 21:47:56.654130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.438 [2024-06-07 21:47:56.654152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.438 [2024-06-07 21:47:56.654162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.438 [2024-06-07 21:47:56.654426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.438 [2024-06-07 21:47:56.654692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.438 [2024-06-07 21:47:56.654703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.438 [2024-06-07 21:47:56.654716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.439 [2024-06-07 21:47:56.658966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.439 [2024-06-07 21:47:56.668105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.439 [2024-06-07 21:47:56.668692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.439 [2024-06-07 21:47:56.668714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.439 [2024-06-07 21:47:56.668724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.439 [2024-06-07 21:47:56.668987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.439 [2024-06-07 21:47:56.669262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.439 [2024-06-07 21:47:56.669275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.439 [2024-06-07 21:47:56.669284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.439 [2024-06-07 21:47:56.673539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.439 [2024-06-07 21:47:56.682822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.439 [2024-06-07 21:47:56.683336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.439 [2024-06-07 21:47:56.683358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.439 [2024-06-07 21:47:56.683368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.439 [2024-06-07 21:47:56.683631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.439 [2024-06-07 21:47:56.683898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.439 [2024-06-07 21:47:56.683910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.439 [2024-06-07 21:47:56.683919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.439 [2024-06-07 21:47:56.688176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.439 [2024-06-07 21:47:56.697458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.439 [2024-06-07 21:47:56.698042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.439 [2024-06-07 21:47:56.698064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.439 [2024-06-07 21:47:56.698074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.439 [2024-06-07 21:47:56.698339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.439 [2024-06-07 21:47:56.698605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.439 [2024-06-07 21:47:56.698616] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.439 [2024-06-07 21:47:56.698626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.439 [2024-06-07 21:47:56.702875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.699 [2024-06-07 21:47:56.712160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.699 [2024-06-07 21:47:56.712744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-06-07 21:47:56.712769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.699 [2024-06-07 21:47:56.712779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.699 [2024-06-07 21:47:56.713053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.699 [2024-06-07 21:47:56.713318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.699 [2024-06-07 21:47:56.713330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.699 [2024-06-07 21:47:56.713339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.699 [2024-06-07 21:47:56.717581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.699 [2024-06-07 21:47:56.726845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.699 [2024-06-07 21:47:56.727440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-06-07 21:47:56.727484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.699 [2024-06-07 21:47:56.727506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.699 [2024-06-07 21:47:56.728070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.699 [2024-06-07 21:47:56.728335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.699 [2024-06-07 21:47:56.728347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.699 [2024-06-07 21:47:56.728356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.699 [2024-06-07 21:47:56.732598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.699 [2024-06-07 21:47:56.741614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.699 [2024-06-07 21:47:56.742224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-06-07 21:47:56.742267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.699 [2024-06-07 21:47:56.742288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.699 [2024-06-07 21:47:56.742688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.699 [2024-06-07 21:47:56.742952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.742964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.742973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.747224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.700 [2024-06-07 21:47:56.756247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.756841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.756883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.756904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.757444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.757713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.757725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.757734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.761973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.700 [2024-06-07 21:47:56.770980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.771573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.771595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.771627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.772191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.772456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.772468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.772477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.776713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.700 [2024-06-07 21:47:56.785726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.786311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.786333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.786342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.786607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.786872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.786884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.786893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.791136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.700 [2024-06-07 21:47:56.800396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.800998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.801051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.801074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.801587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.801852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.801864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.801873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.806124] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.700 [2024-06-07 21:47:56.815134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.815690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.815711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.815720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.815985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.816257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.816269] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.816279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.820517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.700 [2024-06-07 21:47:56.829782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.830301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.830344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.830365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.830877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.831147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.831159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.831168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.835408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.700 [2024-06-07 21:47:56.844424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.845017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.845071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.845092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.845641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.845905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.845916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.845925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.850172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.700 [2024-06-07 21:47:56.859182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.859773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.859815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.859843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.860436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.860742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.860754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.860763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.865002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.700 [2024-06-07 21:47:56.873800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.874341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.874363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.874373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.874638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.874903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.874915] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.874924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.879180] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.700 [2024-06-07 21:47:56.888464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.700 [2024-06-07 21:47:56.888999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-06-07 21:47:56.889052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.700 [2024-06-07 21:47:56.889075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.700 [2024-06-07 21:47:56.889578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.700 [2024-06-07 21:47:56.889843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.700 [2024-06-07 21:47:56.889854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.700 [2024-06-07 21:47:56.889863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.700 [2024-06-07 21:47:56.894115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.701 [2024-06-07 21:47:56.903152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.701 [2024-06-07 21:47:56.903650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-06-07 21:47:56.903671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.701 [2024-06-07 21:47:56.903681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.701 [2024-06-07 21:47:56.903944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.701 [2024-06-07 21:47:56.904220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.701 [2024-06-07 21:47:56.904235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.701 [2024-06-07 21:47:56.904245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.701 [2024-06-07 21:47:56.908498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.701 [2024-06-07 21:47:56.917789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.701 [2024-06-07 21:47:56.918303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-06-07 21:47:56.918325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.701 [2024-06-07 21:47:56.918334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.701 [2024-06-07 21:47:56.918599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.701 [2024-06-07 21:47:56.918865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.701 [2024-06-07 21:47:56.918877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.701 [2024-06-07 21:47:56.918886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.701 [2024-06-07 21:47:56.923141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.701 [2024-06-07 21:47:56.932434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.701 [2024-06-07 21:47:56.933023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-06-07 21:47:56.933078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.701 [2024-06-07 21:47:56.933100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.701 [2024-06-07 21:47:56.933678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.701 [2024-06-07 21:47:56.934034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.701 [2024-06-07 21:47:56.934046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.701 [2024-06-07 21:47:56.934055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.701 [2024-06-07 21:47:56.938312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.701 [2024-06-07 21:47:56.947104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.701 [2024-06-07 21:47:56.947714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-06-07 21:47:56.947756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.701 [2024-06-07 21:47:56.947778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.701 [2024-06-07 21:47:56.948251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.701 [2024-06-07 21:47:56.948522] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.701 [2024-06-07 21:47:56.948534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.701 [2024-06-07 21:47:56.948544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.701 [2024-06-07 21:47:56.952798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.701 [2024-06-07 21:47:56.961851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.701 [2024-06-07 21:47:56.962385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-06-07 21:47:56.962439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.701 [2024-06-07 21:47:56.962460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.701 [2024-06-07 21:47:56.963051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.701 [2024-06-07 21:47:56.963437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.701 [2024-06-07 21:47:56.963449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.701 [2024-06-07 21:47:56.963458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:56.967710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.961 [2024-06-07 21:47:56.976513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:56.977075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:56.977119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.961 [2024-06-07 21:47:56.977141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.961 [2024-06-07 21:47:56.977720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.961 [2024-06-07 21:47:56.978251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.961 [2024-06-07 21:47:56.978268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.961 [2024-06-07 21:47:56.978281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:56.984529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.961 [2024-06-07 21:47:56.991987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:56.992556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:56.992577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.961 [2024-06-07 21:47:56.992587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.961 [2024-06-07 21:47:56.992851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.961 [2024-06-07 21:47:56.993125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.961 [2024-06-07 21:47:56.993137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.961 [2024-06-07 21:47:56.993146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:56.997620] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.961 [2024-06-07 21:47:57.006679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:57.007274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:57.007321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.961 [2024-06-07 21:47:57.007343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.961 [2024-06-07 21:47:57.007874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.961 [2024-06-07 21:47:57.008147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.961 [2024-06-07 21:47:57.008159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.961 [2024-06-07 21:47:57.008168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:57.012463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.961 [2024-06-07 21:47:57.021268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:57.021839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:57.021861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.961 [2024-06-07 21:47:57.021871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.961 [2024-06-07 21:47:57.022145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.961 [2024-06-07 21:47:57.022411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.961 [2024-06-07 21:47:57.022422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.961 [2024-06-07 21:47:57.022432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:57.026691] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.961 [2024-06-07 21:47:57.035981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:57.036498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:57.036520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.961 [2024-06-07 21:47:57.036530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.961 [2024-06-07 21:47:57.036794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.961 [2024-06-07 21:47:57.037065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.961 [2024-06-07 21:47:57.037077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.961 [2024-06-07 21:47:57.037086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:57.041334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.961 [2024-06-07 21:47:57.050627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:57.051268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:57.051313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.961 [2024-06-07 21:47:57.051334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.961 [2024-06-07 21:47:57.051906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.961 [2024-06-07 21:47:57.052181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.961 [2024-06-07 21:47:57.052200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.961 [2024-06-07 21:47:57.052213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:57.056475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.961 [2024-06-07 21:47:57.065268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:57.065882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:57.065925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.961 [2024-06-07 21:47:57.065947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.961 [2024-06-07 21:47:57.066383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.961 [2024-06-07 21:47:57.066649] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.961 [2024-06-07 21:47:57.066660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.961 [2024-06-07 21:47:57.066670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:57.070920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.961 [2024-06-07 21:47:57.080171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:57.080797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:57.080839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.961 [2024-06-07 21:47:57.080861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.961 [2024-06-07 21:47:57.081329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.961 [2024-06-07 21:47:57.081595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.961 [2024-06-07 21:47:57.081606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.961 [2024-06-07 21:47:57.081616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.961 [2024-06-07 21:47:57.085861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.961 [2024-06-07 21:47:57.094895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.961 [2024-06-07 21:47:57.095514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.961 [2024-06-07 21:47:57.095558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.095580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.096061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.096326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.096338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.096347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.100594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.962 [2024-06-07 21:47:57.109626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.110244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.110287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.110308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.110885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.111283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.111301] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.111314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.117560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.962 [2024-06-07 21:47:57.124929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.125529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.125572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.125594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.126162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.126428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.126440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.126449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.130692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.962 [2024-06-07 21:47:57.139475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.140096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.140140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.140161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.140536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.140801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.140813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.140822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.145075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.962 [2024-06-07 21:47:57.154094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.154599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.154621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.154631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.154898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.155172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.155185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.155194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.159441] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.962 [2024-06-07 21:47:57.168722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.169246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.169267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.169277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.169542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.169806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.169818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.169827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.174086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.962 [2024-06-07 21:47:57.183361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.183972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.183993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.184003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.184274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.184539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.184551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.184560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.188801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.962 [2024-06-07 21:47:57.198094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.198696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.198717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.198727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.198992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.199265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.199277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.199286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.203545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.962 [2024-06-07 21:47:57.212822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.213439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.213460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.213470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.213733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:56.962 [2024-06-07 21:47:57.213999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.962 [2024-06-07 21:47:57.214010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.962 [2024-06-07 21:47:57.214019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.962 [2024-06-07 21:47:57.218276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.962 [2024-06-07 21:47:57.227556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.962 [2024-06-07 21:47:57.228139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.962 [2024-06-07 21:47:57.228161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:56.962 [2024-06-07 21:47:57.228171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:56.962 [2024-06-07 21:47:57.228435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.222 [2024-06-07 21:47:57.228700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.222 [2024-06-07 21:47:57.228712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.222 [2024-06-07 21:47:57.228721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.222 [2024-06-07 21:47:57.232968] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.222 [2024-06-07 21:47:57.242250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.222 [2024-06-07 21:47:57.242855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.222 [2024-06-07 21:47:57.242876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.222 [2024-06-07 21:47:57.242886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.222 [2024-06-07 21:47:57.243158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.222 [2024-06-07 21:47:57.243424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.222 [2024-06-07 21:47:57.243435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.222 [2024-06-07 21:47:57.243445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.222 [2024-06-07 21:47:57.247689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.222 [2024-06-07 21:47:57.256970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.222 [2024-06-07 21:47:57.257584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.222 [2024-06-07 21:47:57.257610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.222 [2024-06-07 21:47:57.257620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.222 [2024-06-07 21:47:57.257884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.223 [2024-06-07 21:47:57.258157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.223 [2024-06-07 21:47:57.258170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.223 [2024-06-07 21:47:57.258179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.223 [2024-06-07 21:47:57.262428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.223 [2024-06-07 21:47:57.271718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.223 [2024-06-07 21:47:57.272328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.223 [2024-06-07 21:47:57.272350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.223 [2024-06-07 21:47:57.272360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.223 [2024-06-07 21:47:57.272624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.223 [2024-06-07 21:47:57.272888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.223 [2024-06-07 21:47:57.272900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.223 [2024-06-07 21:47:57.272909] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.223 [2024-06-07 21:47:57.277164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.223 [2024-06-07 21:47:57.286447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.223 [2024-06-07 21:47:57.287056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.223 [2024-06-07 21:47:57.287077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.223 [2024-06-07 21:47:57.287087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.223 [2024-06-07 21:47:57.287351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.223 [2024-06-07 21:47:57.287615] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.223 [2024-06-07 21:47:57.287627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.223 [2024-06-07 21:47:57.287635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.223 [2024-06-07 21:47:57.291890] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.223 [2024-06-07 21:47:57.301161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.223 [2024-06-07 21:47:57.301650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.223 [2024-06-07 21:47:57.301671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.223 [2024-06-07 21:47:57.301681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.223 [2024-06-07 21:47:57.301944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.223 [2024-06-07 21:47:57.302219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.223 [2024-06-07 21:47:57.302232] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.223 [2024-06-07 21:47:57.302241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.223 [2024-06-07 21:47:57.306485] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.223 [2024-06-07 21:47:57.315759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.223 [2024-06-07 21:47:57.316356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.223 [2024-06-07 21:47:57.316378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.223 [2024-06-07 21:47:57.316388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.223 [2024-06-07 21:47:57.316653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.223 [2024-06-07 21:47:57.316920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.223 [2024-06-07 21:47:57.316932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.223 [2024-06-07 21:47:57.316941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.223 [2024-06-07 21:47:57.321193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.223 [2024-06-07 21:47:57.330469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.223 [2024-06-07 21:47:57.331032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.223 [2024-06-07 21:47:57.331054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.223 [2024-06-07 21:47:57.331063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.223 [2024-06-07 21:47:57.331328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.223 [2024-06-07 21:47:57.331593] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.223 [2024-06-07 21:47:57.331604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.223 [2024-06-07 21:47:57.331613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.223 [2024-06-07 21:47:57.335859] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.223 [2024-06-07 21:47:57.345140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.223 [2024-06-07 21:47:57.345745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.223 [2024-06-07 21:47:57.345766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.223 [2024-06-07 21:47:57.345776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.223 [2024-06-07 21:47:57.346046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.223 [2024-06-07 21:47:57.346313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.223 [2024-06-07 21:47:57.346324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.223 [2024-06-07 21:47:57.346334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.223 [2024-06-07 21:47:57.350578] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.223 [2024-06-07 21:47:57.359858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.223 [2024-06-07 21:47:57.360472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.223 [2024-06-07 21:47:57.360493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.223 [2024-06-07 21:47:57.360503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.223 [2024-06-07 21:47:57.360767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.223 [2024-06-07 21:47:57.361038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.223 [2024-06-07 21:47:57.361050] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.223 [2024-06-07 21:47:57.361059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.223 [2024-06-07 21:47:57.365305] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.223 [2024-06-07 21:47:57.374586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.223 [2024-06-07 21:47:57.375189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.223 [2024-06-07 21:47:57.375211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.223 [2024-06-07 21:47:57.375221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.223 [2024-06-07 21:47:57.375485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.223 [2024-06-07 21:47:57.375750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.223 [2024-06-07 21:47:57.375761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.223 [2024-06-07 21:47:57.375770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.223 [2024-06-07 21:47:57.380016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.223 [2024-06-07 21:47:57.389294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.223 [2024-06-07 21:47:57.389825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.223 [2024-06-07 21:47:57.389847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.223 [2024-06-07 21:47:57.389857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.223 [2024-06-07 21:47:57.390127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.223 [2024-06-07 21:47:57.390393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.223 [2024-06-07 21:47:57.390405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.223 [2024-06-07 21:47:57.390414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.223 [2024-06-07 21:47:57.394657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.223 [2024-06-07 21:47:57.403933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.223 [2024-06-07 21:47:57.404552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.223 [2024-06-07 21:47:57.404595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.223 [2024-06-07 21:47:57.404624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.223 [2024-06-07 21:47:57.405151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.223 [2024-06-07 21:47:57.405417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.223 [2024-06-07 21:47:57.405428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.224 [2024-06-07 21:47:57.405437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.224 [2024-06-07 21:47:57.409680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.224 [2024-06-07 21:47:57.418702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.224 [2024-06-07 21:47:57.419303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.224 [2024-06-07 21:47:57.419325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.224 [2024-06-07 21:47:57.419335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.224 [2024-06-07 21:47:57.419599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.224 [2024-06-07 21:47:57.419865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.224 [2024-06-07 21:47:57.419877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.224 [2024-06-07 21:47:57.419886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.224 [2024-06-07 21:47:57.424132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.224 [2024-06-07 21:47:57.433402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.224 [2024-06-07 21:47:57.434006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.224 [2024-06-07 21:47:57.434033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.224 [2024-06-07 21:47:57.434044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.224 [2024-06-07 21:47:57.434308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.224 [2024-06-07 21:47:57.434573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.224 [2024-06-07 21:47:57.434584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.224 [2024-06-07 21:47:57.434593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.224 [2024-06-07 21:47:57.438838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.224 [2024-06-07 21:47:57.448107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.224 [2024-06-07 21:47:57.448619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.224 [2024-06-07 21:47:57.448662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.224 [2024-06-07 21:47:57.448683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.224 [2024-06-07 21:47:57.449230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.224 [2024-06-07 21:47:57.449496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.224 [2024-06-07 21:47:57.449511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.224 [2024-06-07 21:47:57.449520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.224 [2024-06-07 21:47:57.453760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.224 [2024-06-07 21:47:57.462780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.224 [2024-06-07 21:47:57.463370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.224 [2024-06-07 21:47:57.463411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.224 [2024-06-07 21:47:57.463433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.224 [2024-06-07 21:47:57.463977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.224 [2024-06-07 21:47:57.464248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.224 [2024-06-07 21:47:57.464261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.224 [2024-06-07 21:47:57.464270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.224 [2024-06-07 21:47:57.468512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.224 [2024-06-07 21:47:57.477542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.224 [2024-06-07 21:47:57.478151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.224 [2024-06-07 21:47:57.478173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.224 [2024-06-07 21:47:57.478183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.224 [2024-06-07 21:47:57.478448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.224 [2024-06-07 21:47:57.478713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.224 [2024-06-07 21:47:57.478725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.224 [2024-06-07 21:47:57.478734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.224 [2024-06-07 21:47:57.482982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.484 [2024-06-07 21:47:57.492257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.484 [2024-06-07 21:47:57.492879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.484 [2024-06-07 21:47:57.492921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.484 [2024-06-07 21:47:57.492943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.484 [2024-06-07 21:47:57.493352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.484 [2024-06-07 21:47:57.493618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.484 [2024-06-07 21:47:57.493630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.484 [2024-06-07 21:47:57.493638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.484 [2024-06-07 21:47:57.497882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.484 [2024-06-07 21:47:57.506894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.484 [2024-06-07 21:47:57.507514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.484 [2024-06-07 21:47:57.507556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.484 [2024-06-07 21:47:57.507577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.484 [2024-06-07 21:47:57.508100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.508365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.508377] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.508386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.512626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.521644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.522257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.522300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.522322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.522900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.523485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.523497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.523507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.527751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.536267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.536892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.536933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.536955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.537493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.537760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.537771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.537780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.542016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.551037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.551640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.551662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.551672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.551944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.552216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.552229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.552238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.556480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.565743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.566358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.566401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.566422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.566955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.567352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.567370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.567383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.573625] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.580683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.581295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.581338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.581360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.581936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.582507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.582520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.582529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.586780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.595294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.595788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.595809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.595818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.596089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.596354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.596366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.596380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.600622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.609892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.610404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.610425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.610435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.610699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.610963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.610974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.610983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.615235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.624512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.625124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.625167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.625189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.625722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.625986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.625998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.626008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.630259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.639285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.639863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.639883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.639894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.640166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.640430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.640442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.640452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.644693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.485 [2024-06-07 21:47:57.653973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.485 [2024-06-07 21:47:57.654594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.485 [2024-06-07 21:47:57.654636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.485 [2024-06-07 21:47:57.654657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.485 [2024-06-07 21:47:57.655246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.485 [2024-06-07 21:47:57.655724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.485 [2024-06-07 21:47:57.655736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.485 [2024-06-07 21:47:57.655745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.485 [2024-06-07 21:47:57.659992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.486 [2024-06-07 21:47:57.668764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.486 [2024-06-07 21:47:57.669374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.486 [2024-06-07 21:47:57.669395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.486 [2024-06-07 21:47:57.669405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.486 [2024-06-07 21:47:57.669668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.486 [2024-06-07 21:47:57.669932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.486 [2024-06-07 21:47:57.669944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.486 [2024-06-07 21:47:57.669953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.486 [2024-06-07 21:47:57.674234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.486 [2024-06-07 21:47:57.683510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.486 [2024-06-07 21:47:57.684038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.486 [2024-06-07 21:47:57.684060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.486 [2024-06-07 21:47:57.684071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.486 [2024-06-07 21:47:57.684335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.486 [2024-06-07 21:47:57.684601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.486 [2024-06-07 21:47:57.684613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.486 [2024-06-07 21:47:57.684622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.486 [2024-06-07 21:47:57.688971] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.486 [2024-06-07 21:47:57.698264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.486 [2024-06-07 21:47:57.698872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.486 [2024-06-07 21:47:57.698916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.486 [2024-06-07 21:47:57.698939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.486 [2024-06-07 21:47:57.699499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.486 [2024-06-07 21:47:57.699770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.486 [2024-06-07 21:47:57.699783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.486 [2024-06-07 21:47:57.699794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.486 [2024-06-07 21:47:57.704054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.486 [2024-06-07 21:47:57.712843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.486 [2024-06-07 21:47:57.713467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.486 [2024-06-07 21:47:57.713511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.486 [2024-06-07 21:47:57.713532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.486 [2024-06-07 21:47:57.713926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.486 [2024-06-07 21:47:57.714194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.486 [2024-06-07 21:47:57.714207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.486 [2024-06-07 21:47:57.714216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.486 [2024-06-07 21:47:57.718471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.486 [2024-06-07 21:47:57.727505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.486 [2024-06-07 21:47:57.728083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.486 [2024-06-07 21:47:57.728104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.486 [2024-06-07 21:47:57.728114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.486 [2024-06-07 21:47:57.728379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.486 [2024-06-07 21:47:57.728643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.486 [2024-06-07 21:47:57.728654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.486 [2024-06-07 21:47:57.728663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.486 [2024-06-07 21:47:57.732913] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.486 [2024-06-07 21:47:57.742200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.486 [2024-06-07 21:47:57.742766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.486 [2024-06-07 21:47:57.742808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.486 [2024-06-07 21:47:57.742829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.486 [2024-06-07 21:47:57.743424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.486 [2024-06-07 21:47:57.743934] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.486 [2024-06-07 21:47:57.743946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.486 [2024-06-07 21:47:57.743955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.486 [2024-06-07 21:47:57.748211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.747 [2024-06-07 21:47:57.756989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.747 [2024-06-07 21:47:57.757596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.747 [2024-06-07 21:47:57.757618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.747 [2024-06-07 21:47:57.757628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.747 [2024-06-07 21:47:57.757893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.747 [2024-06-07 21:47:57.758164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.747 [2024-06-07 21:47:57.758177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.747 [2024-06-07 21:47:57.758186] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.747 [2024-06-07 21:47:57.762434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.747 [2024-06-07 21:47:57.771726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.747 [2024-06-07 21:47:57.772311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.747 [2024-06-07 21:47:57.772334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.747 [2024-06-07 21:47:57.772344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.747 [2024-06-07 21:47:57.772607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.747 [2024-06-07 21:47:57.772872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.747 [2024-06-07 21:47:57.772883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.747 [2024-06-07 21:47:57.772892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.747 [2024-06-07 21:47:57.777153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.747 [2024-06-07 21:47:57.786437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.747 [2024-06-07 21:47:57.787063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.747 [2024-06-07 21:47:57.787106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.747 [2024-06-07 21:47:57.787128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.747 [2024-06-07 21:47:57.787706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.747 [2024-06-07 21:47:57.788227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.747 [2024-06-07 21:47:57.788240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.747 [2024-06-07 21:47:57.788249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.747 [2024-06-07 21:47:57.792493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.747 [2024-06-07 21:47:57.801007] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.747 [2024-06-07 21:47:57.801631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.747 [2024-06-07 21:47:57.801673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.747 [2024-06-07 21:47:57.801702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.747 [2024-06-07 21:47:57.802214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.747 [2024-06-07 21:47:57.802479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.747 [2024-06-07 21:47:57.802491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.747 [2024-06-07 21:47:57.802500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.747 [2024-06-07 21:47:57.806743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.747 [2024-06-07 21:47:57.815760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.747 [2024-06-07 21:47:57.816278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.747 [2024-06-07 21:47:57.816299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.747 [2024-06-07 21:47:57.816309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.747 [2024-06-07 21:47:57.816574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.747 [2024-06-07 21:47:57.816840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.816851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.816861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.821106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.830370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.830983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.831037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.831061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.831641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.832191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.832203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.832212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.836456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.844971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.845595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.845639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.845660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.846250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.846557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.846569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.846578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.850818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.859585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.860155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.860196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.860218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.860796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.861338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.861350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.861360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.865598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.874369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.874972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.874993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.875003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.875273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.875540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.875551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.875560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.879805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.889071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.889688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.889710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.889720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.889985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.890258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.890271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.890280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.894523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.903798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.904407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.904428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.904438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.904703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.904968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.904980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.904989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.909278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.918551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.919131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.919152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.919162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.919426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.919690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.919701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.919710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.923958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.933232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.933842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.933884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.933905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.934421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.934688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.934699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.934708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.938948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.947966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.948584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.948627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.948656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.949228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.949494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.949506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.748 [2024-06-07 21:47:57.949514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.748 [2024-06-07 21:47:57.953759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.748 [2024-06-07 21:47:57.962527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.748 [2024-06-07 21:47:57.963133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.748 [2024-06-07 21:47:57.963154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.748 [2024-06-07 21:47:57.963165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.748 [2024-06-07 21:47:57.963429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.748 [2024-06-07 21:47:57.963693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.748 [2024-06-07 21:47:57.963704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.749 [2024-06-07 21:47:57.963713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.749 [2024-06-07 21:47:57.967954] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.749 [2024-06-07 21:47:57.977232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.749 [2024-06-07 21:47:57.977836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.749 [2024-06-07 21:47:57.977857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.749 [2024-06-07 21:47:57.977867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.749 [2024-06-07 21:47:57.978137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.749 [2024-06-07 21:47:57.978402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.749 [2024-06-07 21:47:57.978414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.749 [2024-06-07 21:47:57.978423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.749 [2024-06-07 21:47:57.982664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.749 [2024-06-07 21:47:57.991927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:57.749 [2024-06-07 21:47:57.992549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.749 [2024-06-07 21:47:57.992590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:57.749 [2024-06-07 21:47:57.992611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:57.749 [2024-06-07 21:47:57.993195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:57.749 [2024-06-07 21:47:57.993461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:57.749 [2024-06-07 21:47:57.993476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:57.749 [2024-06-07 21:47:57.993486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:57.749 [2024-06-07 21:47:57.997951] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:57.749 [2024-06-07 21:47:58.006472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.749 [2024-06-07 21:47:58.006977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.749 [2024-06-07 21:47:58.006999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:57.749 [2024-06-07 21:47:58.007010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:57.749 [2024-06-07 21:47:58.007282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:57.749 [2024-06-07 21:47:58.007548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.749 [2024-06-07 21:47:58.007560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.749 [2024-06-07 21:47:58.007569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.749 [2024-06-07 21:47:58.011811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.009 [2024-06-07 21:47:58.021081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.009 [2024-06-07 21:47:58.021579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.009 [2024-06-07 21:47:58.021600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.009 [2024-06-07 21:47:58.021610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.009 [2024-06-07 21:47:58.021874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.009 [2024-06-07 21:47:58.022147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.009 [2024-06-07 21:47:58.022159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.009 [2024-06-07 21:47:58.022168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.009 [2024-06-07 21:47:58.026409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.009 [2024-06-07 21:47:58.035693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.009 [2024-06-07 21:47:58.036220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.009 [2024-06-07 21:47:58.036243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.009 [2024-06-07 21:47:58.036253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.009 [2024-06-07 21:47:58.036518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.009 [2024-06-07 21:47:58.036782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.009 [2024-06-07 21:47:58.036793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.009 [2024-06-07 21:47:58.036802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.009 [2024-06-07 21:47:58.041052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.009 [2024-06-07 21:47:58.050320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.009 [2024-06-07 21:47:58.050927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.009 [2024-06-07 21:47:58.050947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.009 [2024-06-07 21:47:58.050957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.009 [2024-06-07 21:47:58.051228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.009 [2024-06-07 21:47:58.051495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.009 [2024-06-07 21:47:58.051507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.009 [2024-06-07 21:47:58.051517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.009 [2024-06-07 21:47:58.055759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.009 [2024-06-07 21:47:58.065032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.009 [2024-06-07 21:47:58.065540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.065581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.065603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.066195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.066710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.066721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.066730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.070970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.010 [2024-06-07 21:47:58.079742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.080355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.080377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.080387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.080652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.080917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.080929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.080938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.085187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.010 [2024-06-07 21:47:58.094455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.095040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.095062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.095072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.095340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.095607] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.095618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.095627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.099875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.010 [2024-06-07 21:47:58.109148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.109762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.109804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.109826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.110420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.110939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.110951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.110960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.115207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.010 [2024-06-07 21:47:58.123717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.124320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.124341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.124351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.124615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.124879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.124890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.124900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.129151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.010 [2024-06-07 21:47:58.138406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.138984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.139005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.139014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.139285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.139550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.139561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.139574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.143811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.010 [2024-06-07 21:47:58.153077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.153694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.153735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.153757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.154350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.154947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.154958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.154967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.159215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.010 [2024-06-07 21:47:58.167735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.168322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.168364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.168385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.168920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.169191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.169203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.169212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.173466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.010 [2024-06-07 21:47:58.182480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.183087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.183130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.183152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.183731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.184116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.184128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.184137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.188383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.010 [2024-06-07 21:47:58.197150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.197762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.197787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.197798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.010 [2024-06-07 21:47:58.198069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.010 [2024-06-07 21:47:58.198334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.010 [2024-06-07 21:47:58.198346] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.010 [2024-06-07 21:47:58.198355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.010 [2024-06-07 21:47:58.202595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.010 [2024-06-07 21:47:58.211859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.010 [2024-06-07 21:47:58.212472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.010 [2024-06-07 21:47:58.212493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.010 [2024-06-07 21:47:58.212503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.011 [2024-06-07 21:47:58.212768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.011 [2024-06-07 21:47:58.213041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.011 [2024-06-07 21:47:58.213053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.011 [2024-06-07 21:47:58.213062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.011 [2024-06-07 21:47:58.217308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.011 [2024-06-07 21:47:58.226573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.011 [2024-06-07 21:47:58.227069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.011 [2024-06-07 21:47:58.227112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.011 [2024-06-07 21:47:58.227134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.011 [2024-06-07 21:47:58.227714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.011 [2024-06-07 21:47:58.228034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.011 [2024-06-07 21:47:58.228046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.011 [2024-06-07 21:47:58.228056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.011 [2024-06-07 21:47:58.232298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.011 [2024-06-07 21:47:58.241311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.011 [2024-06-07 21:47:58.241899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.011 [2024-06-07 21:47:58.241940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.011 [2024-06-07 21:47:58.241962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.011 [2024-06-07 21:47:58.242553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.011 [2024-06-07 21:47:58.243069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.011 [2024-06-07 21:47:58.243081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.011 [2024-06-07 21:47:58.243090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.011 [2024-06-07 21:47:58.247333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.011 [2024-06-07 21:47:58.255846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.011 [2024-06-07 21:47:58.256456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.011 [2024-06-07 21:47:58.256477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.011 [2024-06-07 21:47:58.256487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.011 [2024-06-07 21:47:58.256751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.011 [2024-06-07 21:47:58.257014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.011 [2024-06-07 21:47:58.257032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.011 [2024-06-07 21:47:58.257042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.011 [2024-06-07 21:47:58.261288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.011 [2024-06-07 21:47:58.270553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.011 [2024-06-07 21:47:58.271084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.011 [2024-06-07 21:47:58.271139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.011 [2024-06-07 21:47:58.271160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.011 [2024-06-07 21:47:58.271739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.011 [2024-06-07 21:47:58.272097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.011 [2024-06-07 21:47:58.272110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.011 [2024-06-07 21:47:58.272118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.011 [2024-06-07 21:47:58.276363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.270 [2024-06-07 21:47:58.285124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.270 [2024-06-07 21:47:58.285734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.270 [2024-06-07 21:47:58.285775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.270 [2024-06-07 21:47:58.285797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.270 [2024-06-07 21:47:58.286266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.270 [2024-06-07 21:47:58.286531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.270 [2024-06-07 21:47:58.286543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.270 [2024-06-07 21:47:58.286552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.270 [2024-06-07 21:47:58.292757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.270 [2024-06-07 21:47:58.300285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.270 [2024-06-07 21:47:58.300871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.270 [2024-06-07 21:47:58.300893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.270 [2024-06-07 21:47:58.300903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.270 [2024-06-07 21:47:58.301174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.270 [2024-06-07 21:47:58.301440] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.270 [2024-06-07 21:47:58.301452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.270 [2024-06-07 21:47:58.301461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.270 [2024-06-07 21:47:58.305703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.270 [2024-06-07 21:47:58.314971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.270 [2024-06-07 21:47:58.315574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.270 [2024-06-07 21:47:58.315616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.270 [2024-06-07 21:47:58.315637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.270 [2024-06-07 21:47:58.316171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.270 [2024-06-07 21:47:58.316436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.270 [2024-06-07 21:47:58.316447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.270 [2024-06-07 21:47:58.316456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.270 [2024-06-07 21:47:58.320697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.270 [2024-06-07 21:47:58.329711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.270 [2024-06-07 21:47:58.330297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.270 [2024-06-07 21:47:58.330339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.270 [2024-06-07 21:47:58.330362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.270 [2024-06-07 21:47:58.330939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.270 [2024-06-07 21:47:58.331458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.270 [2024-06-07 21:47:58.331471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.331480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.335724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.271 [2024-06-07 21:47:58.344485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.345088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.345110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.345124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.345388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.345654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.345665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.345674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.349918] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.271 [2024-06-07 21:47:58.359222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.359835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.359879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.359900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.360405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.360672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.360683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.360693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.364934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.271 [2024-06-07 21:47:58.373956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.374568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.374611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.374632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.375205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.375471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.375482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.375492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.379735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.271 [2024-06-07 21:47:58.388495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.389074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.389095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.389106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.389370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.389634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.389648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.389658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.393906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.271 [2024-06-07 21:47:58.403185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.403787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.403808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.403817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.404088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.404360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.404372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.404381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.408629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.271 [2024-06-07 21:47:58.417901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.418461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.418482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.418492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.418756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.419022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.419040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.419050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.423294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.271 [2024-06-07 21:47:58.432569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.433149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.433170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.433180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.433444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.433709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.433721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.433730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.437976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.271 [2024-06-07 21:47:58.447260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.447778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.447799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.447809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.448081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.448347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.448358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.448368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.452628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.271 [2024-06-07 21:47:58.461920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.462530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.462551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.462561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.462825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.463096] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.463108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.463117] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.467357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.271 [2024-06-07 21:47:58.476637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.477243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.477265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.477275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.477539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.477802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.477814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.477823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.482075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.271 [2024-06-07 21:47:58.491379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.492010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.492039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.492053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.492317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.492581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.492593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.492603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.496854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.271 [2024-06-07 21:47:58.506144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.506749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.506770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.506780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.507050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.507316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.507327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.507336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.511583] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.271 [2024-06-07 21:47:58.520855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.521459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.521481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.521491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.521755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.522021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.522040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.522049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.271 [2024-06-07 21:47:58.526292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.271 [2024-06-07 21:47:58.535567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.271 [2024-06-07 21:47:58.536173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.271 [2024-06-07 21:47:58.536195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.271 [2024-06-07 21:47:58.536205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.271 [2024-06-07 21:47:58.536470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.271 [2024-06-07 21:47:58.536735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.271 [2024-06-07 21:47:58.536747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.271 [2024-06-07 21:47:58.536760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.531 [2024-06-07 21:47:58.541015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.531 [2024-06-07 21:47:58.550298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.531 [2024-06-07 21:47:58.550723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.531 [2024-06-07 21:47:58.550744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.531 [2024-06-07 21:47:58.550754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.531 [2024-06-07 21:47:58.551018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.531 [2024-06-07 21:47:58.551292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.531 [2024-06-07 21:47:58.551304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.531 [2024-06-07 21:47:58.551313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.531 [2024-06-07 21:47:58.555559] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.531 [2024-06-07 21:47:58.564840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.531 [2024-06-07 21:47:58.565346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.531 [2024-06-07 21:47:58.565367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.531 [2024-06-07 21:47:58.565377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.531 [2024-06-07 21:47:58.565641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.531 [2024-06-07 21:47:58.565907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.531 [2024-06-07 21:47:58.565918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.531 [2024-06-07 21:47:58.565928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.531 [2024-06-07 21:47:58.570182] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.531 [2024-06-07 21:47:58.579469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.531 [2024-06-07 21:47:58.580080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.531 [2024-06-07 21:47:58.580123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.531 [2024-06-07 21:47:58.580144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.531 [2024-06-07 21:47:58.580723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.531 [2024-06-07 21:47:58.581179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.531 [2024-06-07 21:47:58.581197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.531 [2024-06-07 21:47:58.581210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.531 [2024-06-07 21:47:58.587453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.531 [2024-06-07 21:47:58.594710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.531 [2024-06-07 21:47:58.595241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.531 [2024-06-07 21:47:58.595263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.532 [2024-06-07 21:47:58.595274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.532 [2024-06-07 21:47:58.595854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.532 [2024-06-07 21:47:58.596125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.532 [2024-06-07 21:47:58.596137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.532 [2024-06-07 21:47:58.596146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.532 [2024-06-07 21:47:58.600397] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.532 [2024-06-07 21:47:58.609424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.532 [2024-06-07 21:47:58.609801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.532 [2024-06-07 21:47:58.609822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.532 [2024-06-07 21:47:58.609832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.532 [2024-06-07 21:47:58.610102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.532 [2024-06-07 21:47:58.610368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.532 [2024-06-07 21:47:58.610379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.532 [2024-06-07 21:47:58.610388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.532 [2024-06-07 21:47:58.614632] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
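The repeated errno = 111 above is Linux ECONNREFUSED: each time the initiator retries, its TCP connect() to the target listener at 10.0.0.2:4420 is answered with a RST because no process is bound to that port while the target is down. As a minimal sketch of the same condition (the probe script itself is hypothetical; only the address and port are taken from the log, and it assumes a Linux bash with /dev/tcp support):

#!/usr/bin/env bash
# Probe the NVMe/TCP listener the initiator keeps retrying against.
# An answered connection means a listener is bound; a refused one is the
# ECONNREFUSED (errno 111) seen in the log records above.
ADDR=10.0.0.2   # from the log; hypothetical probe, not part of the test
PORT=4420
if timeout 1 bash -c "exec 3<>/dev/tcp/${ADDR}/${PORT}" 2>/dev/null; then
    echo "listener is up on ${ADDR}:${PORT}"
else
    echo "connect to ${ADDR}:${PORT} failed -- no listener (ECONNREFUSED)"
fi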
00:30:58.532 [2024-06-07 21:47:58.624160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.532 [2024-06-07 21:47:58.624660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.532 [2024-06-07 21:47:58.624715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:58.532 [2024-06-07 21:47:58.624737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:58.532 [2024-06-07 21:47:58.625329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:58.532 [2024-06-07 21:47:58.625911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.532 [2024-06-07 21:47:58.625936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.532 [2024-06-07 21:47:58.625955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1630718 Killed "${NVMF_APP[@]}" "$@" 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:58.532 [2024-06-07 21:47:58.630223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1632183 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1632183 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1632183 ']' 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable
00:30:58.532 21:47:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:58.532 [2024-06-07 21:47:58.638751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.639261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.639283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.639293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.639559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.639822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.639833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.639843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.644096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.653379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.653907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.653929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.653939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.654209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.654475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.654486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.654495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.658747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.668038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.668551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.668571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.668581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.668847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.669122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.669134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.669143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.673400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.682679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.683265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.683288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.683298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.683563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.683828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.683839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.683848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.688102] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.688403] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:30:58.532 [2024-06-07 21:47:58.688457] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:58.532 [2024-06-07 21:47:58.697378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.697882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.697903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.697913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.698185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.698450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.698462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.698471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.702722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.712113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.712630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.712652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.712663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.712928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.713205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.713219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.713229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.717478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.726761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.727227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.727249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.727260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.727524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.727789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.727801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.727810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 EAL: No free 2048 kB hugepages reported on node 1
00:30:58.532 [2024-06-07 21:47:58.732066] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.741355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.741936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.741958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.741968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.742239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.742506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.742518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.742527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.746791] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.756079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.756622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.756643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.756654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.756919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.757191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.757204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.757214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.761465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.770741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.771255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.771276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.771286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.771551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.771814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.771826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.771835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.776095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.532 [2024-06-07 21:47:58.777948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:58.532 [2024-06-07 21:47:58.785387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.532 [2024-06-07 21:47:58.785908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.532 [2024-06-07 21:47:58.785932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.532 [2024-06-07 21:47:58.785942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.532 [2024-06-07 21:47:58.786212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.532 [2024-06-07 21:47:58.786477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.532 [2024-06-07 21:47:58.786489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.532 [2024-06-07 21:47:58.786499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.532 [2024-06-07 21:47:58.790744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.792 [2024-06-07 21:47:58.800032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.792 [2024-06-07 21:47:58.800537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.792 [2024-06-07 21:47:58.800558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.792 [2024-06-07 21:47:58.800568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.792 [2024-06-07 21:47:58.800832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.792 [2024-06-07 21:47:58.801105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.792 [2024-06-07 21:47:58.801117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.792 [2024-06-07 21:47:58.801126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.792 [2024-06-07 21:47:58.805378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.792 [2024-06-07 21:47:58.814647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.792 [2024-06-07 21:47:58.815154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.792 [2024-06-07 21:47:58.815182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.792 [2024-06-07 21:47:58.815192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.792 [2024-06-07 21:47:58.815457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.792 [2024-06-07 21:47:58.815722] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.792 [2024-06-07 21:47:58.815733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.792 [2024-06-07 21:47:58.815742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.792 [2024-06-07 21:47:58.819990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.829280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.829825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.829848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.829859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.830129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.830397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.830409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.830419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.834668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.843946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.844475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.844497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.844507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.844772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.845046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.845058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.845068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.849314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.858593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.859198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.859220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.859230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.859495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.859764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.859776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.859785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.864037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.872150] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:58.793 [2024-06-07 21:47:58.872182] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:58.793 [2024-06-07 21:47:58.872191] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:58.793 [2024-06-07 21:47:58.872200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:58.793 [2024-06-07 21:47:58.872207] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:58.793 [2024-06-07 21:47:58.872249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:30:58.793 [2024-06-07 21:47:58.872384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:30:58.793 [2024-06-07 21:47:58.872385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:58.793 [2024-06-07 21:47:58.873328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.873792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.873814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.873824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.874095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.874361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.874373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.874383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.878638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.887912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.888441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.888465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.888476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.888741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.889008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.889020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.889036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.893286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.902570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.903195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.903227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.903238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.903503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.903770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.903782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.903792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.908047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.917311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.917942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.917965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.917976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.918247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.918513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.918525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.918534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.922777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.932058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.932694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.932716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.932727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.932992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.933265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.933277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.933287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.937527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.946794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.793 [2024-06-07 21:47:58.947330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.793 [2024-06-07 21:47:58.947352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.793 [2024-06-07 21:47:58.947362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.793 [2024-06-07 21:47:58.947626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.793 [2024-06-07 21:47:58.947896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.793 [2024-06-07 21:47:58.947907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.793 [2024-06-07 21:47:58.947916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.793 [2024-06-07 21:47:58.952165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.793 [2024-06-07 21:47:58.961429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.794 [2024-06-07 21:47:58.962013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.794 [2024-06-07 21:47:58.962039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.794 [2024-06-07 21:47:58.962050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.794 [2024-06-07 21:47:58.962313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.794 [2024-06-07 21:47:58.962578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.794 [2024-06-07 21:47:58.962590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.794 [2024-06-07 21:47:58.962599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.794 [2024-06-07 21:47:58.966842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.794 [2024-06-07 21:47:58.976116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.794 [2024-06-07 21:47:58.976702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.794 [2024-06-07 21:47:58.976723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.794 [2024-06-07 21:47:58.976733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.794 [2024-06-07 21:47:58.976996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.794 [2024-06-07 21:47:58.977268] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.794 [2024-06-07 21:47:58.977280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.794 [2024-06-07 21:47:58.977289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.794 [2024-06-07 21:47:58.981527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.794 [2024-06-07 21:47:58.990792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.794 [2024-06-07 21:47:58.991375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.794 [2024-06-07 21:47:58.991396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.794 [2024-06-07 21:47:58.991406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.794 [2024-06-07 21:47:58.991672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.794 [2024-06-07 21:47:58.991936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.794 [2024-06-07 21:47:58.991948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.794 [2024-06-07 21:47:58.991958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.794 [2024-06-07 21:47:58.996458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.794 [2024-06-07 21:47:59.005472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.794 [2024-06-07 21:47:59.006058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.794 [2024-06-07 21:47:59.006081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.794 [2024-06-07 21:47:59.006092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.794 [2024-06-07 21:47:59.006356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.794 [2024-06-07 21:47:59.006621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.794 [2024-06-07 21:47:59.006633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.794 [2024-06-07 21:47:59.006642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.794 [2024-06-07 21:47:59.010892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.794 [2024-06-07 21:47:59.020159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.794 [2024-06-07 21:47:59.020741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.794 [2024-06-07 21:47:59.020762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.794 [2024-06-07 21:47:59.020772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.794 [2024-06-07 21:47:59.021043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.794 [2024-06-07 21:47:59.021308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.794 [2024-06-07 21:47:59.021320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.794 [2024-06-07 21:47:59.021329] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.794 [2024-06-07 21:47:59.025569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.794 [2024-06-07 21:47:59.034835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.794 [2024-06-07 21:47:59.035422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.794 [2024-06-07 21:47:59.035444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.794 [2024-06-07 21:47:59.035454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.794 [2024-06-07 21:47:59.035719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.794 [2024-06-07 21:47:59.035983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.794 [2024-06-07 21:47:59.035995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.794 [2024-06-07 21:47:59.036004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.794 [2024-06-07 21:47:59.040247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:58.794 [2024-06-07 21:47:59.049510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.794 [2024-06-07 21:47:59.050092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:58.794 [2024-06-07 21:47:59.050114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:58.794 [2024-06-07 21:47:59.050128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:58.794 [2024-06-07 21:47:59.050392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:58.794 [2024-06-07 21:47:59.050656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:58.794 [2024-06-07 21:47:59.050668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:58.794 [2024-06-07 21:47:59.050677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.794 [2024-06-07 21:47:59.054919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.064379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.064958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.064980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.055 [2024-06-07 21:47:59.064991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.055 [2024-06-07 21:47:59.065260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.055 [2024-06-07 21:47:59.065526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.055 [2024-06-07 21:47:59.065537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.055 [2024-06-07 21:47:59.065546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.055 [2024-06-07 21:47:59.069788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.079060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.079642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.079664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.055 [2024-06-07 21:47:59.079674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.055 [2024-06-07 21:47:59.079939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.055 [2024-06-07 21:47:59.080210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.055 [2024-06-07 21:47:59.080222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.055 [2024-06-07 21:47:59.080231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.055 [2024-06-07 21:47:59.084472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.093731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.094325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.094347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.055 [2024-06-07 21:47:59.094357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.055 [2024-06-07 21:47:59.094622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.055 [2024-06-07 21:47:59.094887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.055 [2024-06-07 21:47:59.094902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.055 [2024-06-07 21:47:59.094912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.055 [2024-06-07 21:47:59.099160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.108422] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.109006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.109032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.055 [2024-06-07 21:47:59.109043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.055 [2024-06-07 21:47:59.109306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.055 [2024-06-07 21:47:59.109570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.055 [2024-06-07 21:47:59.109582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.055 [2024-06-07 21:47:59.109591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.055 [2024-06-07 21:47:59.113832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.123097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.123652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.123673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.055 [2024-06-07 21:47:59.123683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.055 [2024-06-07 21:47:59.123946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.055 [2024-06-07 21:47:59.124219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.055 [2024-06-07 21:47:59.124231] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.055 [2024-06-07 21:47:59.124240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.055 [2024-06-07 21:47:59.128480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.137743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.138328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.138350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.055 [2024-06-07 21:47:59.138360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.055 [2024-06-07 21:47:59.138624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.055 [2024-06-07 21:47:59.138888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.055 [2024-06-07 21:47:59.138900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.055 [2024-06-07 21:47:59.138909] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.055 [2024-06-07 21:47:59.143153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.152417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.153003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.153029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.055 [2024-06-07 21:47:59.153040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.055 [2024-06-07 21:47:59.153304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.055 [2024-06-07 21:47:59.153569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.055 [2024-06-07 21:47:59.153580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.055 [2024-06-07 21:47:59.153589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.055 [2024-06-07 21:47:59.157832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.167100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.167686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.167707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.055 [2024-06-07 21:47:59.167717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.055 [2024-06-07 21:47:59.167981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.055 [2024-06-07 21:47:59.168252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.055 [2024-06-07 21:47:59.168264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.055 [2024-06-07 21:47:59.168273] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.055 [2024-06-07 21:47:59.172526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.055 [2024-06-07 21:47:59.181800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.055 [2024-06-07 21:47:59.182408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.055 [2024-06-07 21:47:59.182430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.056 [2024-06-07 21:47:59.182440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.056 [2024-06-07 21:47:59.182705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.056 [2024-06-07 21:47:59.182970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.056 [2024-06-07 21:47:59.182981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.056 [2024-06-07 21:47:59.182991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.056 [2024-06-07 21:47:59.187235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.056 [2024-06-07 21:47:59.196497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.056 [2024-06-07 21:47:59.197078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.056 [2024-06-07 21:47:59.197100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.056 [2024-06-07 21:47:59.197110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.056 [2024-06-07 21:47:59.197378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.056 [2024-06-07 21:47:59.197643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.056 [2024-06-07 21:47:59.197654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.056 [2024-06-07 21:47:59.197663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.056 [2024-06-07 21:47:59.201902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.056 [2024-06-07 21:47:59.211165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.056 [2024-06-07 21:47:59.211743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.056 [2024-06-07 21:47:59.211764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.056 [2024-06-07 21:47:59.211774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.056 [2024-06-07 21:47:59.212043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.056 [2024-06-07 21:47:59.212309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.056 [2024-06-07 21:47:59.212321] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.056 [2024-06-07 21:47:59.212330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.056 [2024-06-07 21:47:59.216563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.056 [2024-06-07 21:47:59.225824] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.056 [2024-06-07 21:47:59.226413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.056 [2024-06-07 21:47:59.226435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.056 [2024-06-07 21:47:59.226445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.056 [2024-06-07 21:47:59.226708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.056 [2024-06-07 21:47:59.226974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.056 [2024-06-07 21:47:59.226985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.056 [2024-06-07 21:47:59.226994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.056 [2024-06-07 21:47:59.231236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.056 [2024-06-07 21:47:59.240487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:59.056 [2024-06-07 21:47:59.241069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.056 [2024-06-07 21:47:59.241090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420
00:30:59.056 [2024-06-07 21:47:59.241101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set
00:30:59.056 [2024-06-07 21:47:59.241366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor
00:30:59.056 [2024-06-07 21:47:59.241631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:59.056 [2024-06-07 21:47:59.241643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:59.056 [2024-06-07 21:47:59.241656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:59.056 [2024-06-07 21:47:59.245900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:59.056 [2024-06-07 21:47:59.255153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.056 [2024-06-07 21:47:59.255659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.056 [2024-06-07 21:47:59.255679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.056 [2024-06-07 21:47:59.255689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.056 [2024-06-07 21:47:59.255954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.056 [2024-06-07 21:47:59.256223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.056 [2024-06-07 21:47:59.256236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.056 [2024-06-07 21:47:59.256245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.056 [2024-06-07 21:47:59.260492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.056 [2024-06-07 21:47:59.269761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.056 [2024-06-07 21:47:59.270323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.056 [2024-06-07 21:47:59.270345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.056 [2024-06-07 21:47:59.270355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.056 [2024-06-07 21:47:59.270619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.056 [2024-06-07 21:47:59.270885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.056 [2024-06-07 21:47:59.270896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.056 [2024-06-07 21:47:59.270905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.056 [2024-06-07 21:47:59.275151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.056 [2024-06-07 21:47:59.284415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.056 [2024-06-07 21:47:59.284922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.056 [2024-06-07 21:47:59.284943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.056 [2024-06-07 21:47:59.284953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.056 [2024-06-07 21:47:59.285223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.056 [2024-06-07 21:47:59.285488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.056 [2024-06-07 21:47:59.285499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.056 [2024-06-07 21:47:59.285509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.056 [2024-06-07 21:47:59.289742] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.056 [2024-06-07 21:47:59.299003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.056 [2024-06-07 21:47:59.299595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.056 [2024-06-07 21:47:59.299616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.056 [2024-06-07 21:47:59.299626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.056 [2024-06-07 21:47:59.299889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.056 [2024-06-07 21:47:59.300159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.056 [2024-06-07 21:47:59.300171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.056 [2024-06-07 21:47:59.300180] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.056 [2024-06-07 21:47:59.304418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.056 [2024-06-07 21:47:59.313678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.056 [2024-06-07 21:47:59.314236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.056 [2024-06-07 21:47:59.314257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.056 [2024-06-07 21:47:59.314267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.056 [2024-06-07 21:47:59.314532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.056 [2024-06-07 21:47:59.314797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.056 [2024-06-07 21:47:59.314808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.056 [2024-06-07 21:47:59.314818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.056 [2024-06-07 21:47:59.319064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.317 [2024-06-07 21:47:59.328334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.317 [2024-06-07 21:47:59.328916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.317 [2024-06-07 21:47:59.328938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.317 [2024-06-07 21:47:59.328948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.317 [2024-06-07 21:47:59.329218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.317 [2024-06-07 21:47:59.329484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.317 [2024-06-07 21:47:59.329495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.317 [2024-06-07 21:47:59.329504] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.317 [2024-06-07 21:47:59.333744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.317 [2024-06-07 21:47:59.343003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.317 [2024-06-07 21:47:59.343590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.317 [2024-06-07 21:47:59.343611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.317 [2024-06-07 21:47:59.343621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.317 [2024-06-07 21:47:59.343885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.317 [2024-06-07 21:47:59.344158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.317 [2024-06-07 21:47:59.344170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.317 [2024-06-07 21:47:59.344179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.317 [2024-06-07 21:47:59.348420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.317 [2024-06-07 21:47:59.357684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.317 [2024-06-07 21:47:59.358263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.317 [2024-06-07 21:47:59.358283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.317 [2024-06-07 21:47:59.358294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.317 [2024-06-07 21:47:59.358557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.317 [2024-06-07 21:47:59.358822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.317 [2024-06-07 21:47:59.358834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.317 [2024-06-07 21:47:59.358843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.317 [2024-06-07 21:47:59.363092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.317 [2024-06-07 21:47:59.372358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.317 [2024-06-07 21:47:59.372936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.317 [2024-06-07 21:47:59.372957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.317 [2024-06-07 21:47:59.372967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.317 [2024-06-07 21:47:59.373236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.317 [2024-06-07 21:47:59.373502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.317 [2024-06-07 21:47:59.373513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.317 [2024-06-07 21:47:59.373522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.317 [2024-06-07 21:47:59.377760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.317 [2024-06-07 21:47:59.387033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.317 [2024-06-07 21:47:59.387609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.317 [2024-06-07 21:47:59.387630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.317 [2024-06-07 21:47:59.387640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.317 [2024-06-07 21:47:59.387903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.317 [2024-06-07 21:47:59.388174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.317 [2024-06-07 21:47:59.388185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.317 [2024-06-07 21:47:59.388195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.317 [2024-06-07 21:47:59.392438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.317 [2024-06-07 21:47:59.401703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.317 [2024-06-07 21:47:59.402309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.317 [2024-06-07 21:47:59.402331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.317 [2024-06-07 21:47:59.402341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.317 [2024-06-07 21:47:59.402606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.317 [2024-06-07 21:47:59.402871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.317 [2024-06-07 21:47:59.402883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.317 [2024-06-07 21:47:59.402892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.317 [2024-06-07 21:47:59.407141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.317 [2024-06-07 21:47:59.416410] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.417013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.417040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.417051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.417316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.417580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.417592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.417603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.421846] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.318 [2024-06-07 21:47:59.431117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.431694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.431714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.431724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.431988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.432261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.432273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.432282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.436523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.318 [2024-06-07 21:47:59.445791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.446395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.446420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.446430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.446694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.446959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.446970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.446979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.451222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.318 [2024-06-07 21:47:59.460489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.461008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.461034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.461045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.461309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.461575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.461587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.461596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.465842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.318 [2024-06-07 21:47:59.475117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.475644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.475665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.475675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.475940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.476211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.476224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.476232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.480478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.318 [2024-06-07 21:47:59.489746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.490308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.490328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.490338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.490604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.490872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.490884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.490893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.495141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.318 [2024-06-07 21:47:59.504408] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.505014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.505041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.505052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.505315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.505581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.505593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.505602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.509843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.318 [2024-06-07 21:47:59.519116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.519634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.519654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.519665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.519930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.520199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.520212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.520221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.524466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.318 [2024-06-07 21:47:59.533740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.534321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.534343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.534353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.534616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.318 [2024-06-07 21:47:59.534881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.318 [2024-06-07 21:47:59.534894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.318 [2024-06-07 21:47:59.534903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.318 [2024-06-07 21:47:59.539147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.318 [2024-06-07 21:47:59.548426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.318 [2024-06-07 21:47:59.549036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.318 [2024-06-07 21:47:59.549057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.318 [2024-06-07 21:47:59.549067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.318 [2024-06-07 21:47:59.549331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.319 [2024-06-07 21:47:59.549597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.319 [2024-06-07 21:47:59.549610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.319 [2024-06-07 21:47:59.549620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.319 [2024-06-07 21:47:59.553864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.319 [2024-06-07 21:47:59.563138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.319 [2024-06-07 21:47:59.563653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.319 [2024-06-07 21:47:59.563674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.319 [2024-06-07 21:47:59.563685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.319 [2024-06-07 21:47:59.563951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.319 [2024-06-07 21:47:59.564221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.319 [2024-06-07 21:47:59.564234] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.319 [2024-06-07 21:47:59.564243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.319 [2024-06-07 21:47:59.568488] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.319 [2024-06-07 21:47:59.577761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.319 [2024-06-07 21:47:59.578395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.319 [2024-06-07 21:47:59.578416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.319 [2024-06-07 21:47:59.578427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.319 [2024-06-07 21:47:59.578690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.319 [2024-06-07 21:47:59.578955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.319 [2024-06-07 21:47:59.578968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.319 [2024-06-07 21:47:59.578977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.319 [2024-06-07 21:47:59.583224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.579 [2024-06-07 21:47:59.592491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.579 [2024-06-07 21:47:59.593100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.579 [2024-06-07 21:47:59.593121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.579 [2024-06-07 21:47:59.593135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.579 [2024-06-07 21:47:59.593399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.579 [2024-06-07 21:47:59.593665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.579 [2024-06-07 21:47:59.593677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.579 [2024-06-07 21:47:59.593686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.579 [2024-06-07 21:47:59.597926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.579 [2024-06-07 21:47:59.607194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.579 [2024-06-07 21:47:59.607777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.579 [2024-06-07 21:47:59.607799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.579 [2024-06-07 21:47:59.607809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.579 [2024-06-07 21:47:59.608078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.579 [2024-06-07 21:47:59.608344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.579 [2024-06-07 21:47:59.608356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.579 [2024-06-07 21:47:59.608364] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.579 [2024-06-07 21:47:59.612607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.579 [2024-06-07 21:47:59.621871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.579 [2024-06-07 21:47:59.622457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.579 [2024-06-07 21:47:59.622478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.579 [2024-06-07 21:47:59.622488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.579 [2024-06-07 21:47:59.622752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.579 [2024-06-07 21:47:59.623018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.579 [2024-06-07 21:47:59.623035] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.579 [2024-06-07 21:47:59.623045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.579 [2024-06-07 21:47:59.627283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.579 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:59.579 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:30:59.579 21:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.579 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:59.579 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.579 [2024-06-07 21:47:59.636544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.579 [2024-06-07 21:47:59.637070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.579 [2024-06-07 21:47:59.637093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.579 [2024-06-07 21:47:59.637109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.579 [2024-06-07 21:47:59.637376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.579 [2024-06-07 21:47:59.637642] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.579 [2024-06-07 21:47:59.637654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.579 [2024-06-07 21:47:59.637663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.579 [2024-06-07 21:47:59.641909] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.579 [2024-06-07 21:47:59.651191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.579 [2024-06-07 21:47:59.651790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.579 [2024-06-07 21:47:59.651811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.579 [2024-06-07 21:47:59.651821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.579 [2024-06-07 21:47:59.652092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.579 [2024-06-07 21:47:59.652359] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.579 [2024-06-07 21:47:59.652373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.579 [2024-06-07 21:47:59.652384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.579 [2024-06-07 21:47:59.656630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
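The `(( i == 0 ))` / `return 0` pair in the xtrace above looks like the tail of a waitforlisten-style poll loop: the test waits for the nvmf target to come up before issuing any rpc_cmd calls. A minimal sketch of that pattern, with the socket path and retry count as illustrative assumptions rather than the exact autotest_common.sh implementation:

```sh
# Hedged sketch of a waitforlisten-style loop: poll for the target's RPC
# socket (assumed here to be /var/tmp/spdk.sock) before issuing rpc_cmd calls.
waitforlisten_sketch() {
    local i
    for ((i = 100; i > 0; i--)); do
        [ -S /var/tmp/spdk.sock ] && break   # socket exists: target is up
        sleep 0.1
    done
    (( i == 0 )) && return 1                 # loop exhausted: target never came up
    return 0
}
```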
00:30:59.580 [2024-06-07 21:47:59.665900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.580 [2024-06-07 21:47:59.666403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.580 [2024-06-07 21:47:59.666424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.580 [2024-06-07 21:47:59.666436] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.580 [2024-06-07 21:47:59.666699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.580 [2024-06-07 21:47:59.666966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.580 [2024-06-07 21:47:59.666977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.580 [2024-06-07 21:47:59.666986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.580 [2024-06-07 21:47:59.671236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.580 [2024-06-07 21:47:59.674690] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:59.580 [2024-06-07 21:47:59.680528] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:59.580 [2024-06-07 21:47:59.681103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.580 [2024-06-07 21:47:59.681125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.580 [2024-06-07 21:47:59.681135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:59.580 [2024-06-07 21:47:59.681400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.580 [2024-06-07 21:47:59.681668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.580 [2024-06-07 21:47:59.681680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.580 [2024-06-07 21:47:59.681689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:59.580 [2024-06-07 21:47:59.685935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.580 [2024-06-07 21:47:59.695207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.580 [2024-06-07 21:47:59.695808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.580 [2024-06-07 21:47:59.695828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.580 [2024-06-07 21:47:59.695838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.580 [2024-06-07 21:47:59.696108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.580 [2024-06-07 21:47:59.696374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.580 [2024-06-07 21:47:59.696386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.580 [2024-06-07 21:47:59.696395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.580 [2024-06-07 21:47:59.700636] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.580 [2024-06-07 21:47:59.709898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.580 [2024-06-07 21:47:59.710503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.580 [2024-06-07 21:47:59.710525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.580 [2024-06-07 21:47:59.710535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.580 [2024-06-07 21:47:59.710799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.580 [2024-06-07 21:47:59.711071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.580 [2024-06-07 21:47:59.711084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.580 [2024-06-07 21:47:59.711093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.580 [2024-06-07 21:47:59.715335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.580 Malloc0 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.580 [2024-06-07 21:47:59.724599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.580 [2024-06-07 21:47:59.725125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.580 [2024-06-07 21:47:59.725146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.580 [2024-06-07 21:47:59.725157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.580 [2024-06-07 21:47:59.725424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.580 [2024-06-07 21:47:59.725690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.580 [2024-06-07 21:47:59.725702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.580 [2024-06-07 21:47:59.725711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.580 [2024-06-07 21:47:59.729958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.580 [2024-06-07 21:47:59.739372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.580 [2024-06-07 21:47:59.739986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.580 [2024-06-07 21:47:59.740009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b92dc0 with addr=10.0.0.2, port=4420 00:30:59.580 [2024-06-07 21:47:59.740019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92dc0 is same with the state(5) to be set 00:30:59.580 [2024-06-07 21:47:59.740289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b92dc0 (9): Bad file descriptor 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:59.580 [2024-06-07 21:47:59.740560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.580 [2024-06-07 21:47:59.740574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.580 [2024-06-07 21:47:59.740584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.580 [2024-06-07 21:47:59.743158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.580 [2024-06-07 21:47:59.744832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:59.580 21:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1631176 00:30:59.580 [2024-06-07 21:47:59.754110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.839 [2024-06-07 21:47:59.919892] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
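Stripped of the xtrace and reconnect noise, the rpc_cmd calls interleaved above (host/bdevperf.sh@17 through @21) bring the target up in five steps. The same sequence replayed against SPDK's rpc.py, with the script path assumed relative to the spdk checkout and every flag copied verbatim from the log:

```sh
# Five-step NVMe-oF TCP target bring-up, as driven by host/bdevperf.sh:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Only after the last call does the target print "NVMe/TCP Target Listening on 10.0.0.2 port 4420", which is why the initiator's reconnect loop finally reports "Resetting controller successful." at 21:47:59.919 above.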
00:31:07.956 00:31:07.956 Latency(us) 00:31:07.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.956 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:07.956 Verification LBA range: start 0x0 length 0x4000 00:31:07.956 Nvme1n1 : 15.02 5421.10 21.18 8666.45 0.00 9055.42 960.70 19541.64 00:31:07.956 =================================================================================================================== 00:31:07.956 Total : 5421.10 21.18 8666.45 0.00 9055.42 960.70 19541.64 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:08.215 rmmod nvme_tcp 00:31:08.215 rmmod nvme_fabrics 00:31:08.215 rmmod nvme_keyring 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1632183 ']' 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1632183 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 1632183 ']' 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 1632183 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1632183 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1632183' 00:31:08.215 killing process with pid 1632183 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 1632183 00:31:08.215 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 1632183 00:31:08.474 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:08.474 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
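The bdevperf summary line above is internally consistent: with the 4096-byte I/O size from the job header, 5421.10 IOPS works out to the reported 21.18 MiB/s. A one-line check, assuming bc is installed:

```sh
# IOPS * 4096 bytes per I/O / 2^20 bytes per MiB = MiB/s
echo 'scale=2; 5421.10 * 4096 / 1048576' | bc   # prints 21.17 (21.18 after rounding)
```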
00:31:08.474 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:08.474 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:08.474 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:08.474 21:48:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.474 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.474 21:48:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.008 21:48:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:11.008 00:31:11.008 real 0m27.508s 00:31:11.008 user 1m4.033s 00:31:11.008 sys 0m7.088s 00:31:11.008 21:48:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:11.008 21:48:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:11.008 ************************************ 00:31:11.008 END TEST nvmf_bdevperf 00:31:11.008 ************************************ 00:31:11.008 21:48:10 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:11.008 21:48:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:11.008 21:48:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:11.008 21:48:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:11.008 ************************************ 00:31:11.008 START TEST nvmf_target_disconnect 00:31:11.008 ************************************ 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:11.008 * Looking for test storage... 
00:31:11.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.008 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.009 21:48:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:17.586 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:17.586 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:17.587 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.587 21:48:17 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:17.587 Found net devices under 0000:af:00.0: cvl_0_0 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:17.587 Found net devices under 0000:af:00.1: cvl_0_1 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:17.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:31:17.587 00:31:17.587 --- 10.0.0.2 ping statistics --- 00:31:17.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.587 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:17.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:31:17.587 00:31:17.587 --- 10.0.0.1 ping statistics --- 00:31:17.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.587 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:17.587 ************************************ 00:31:17.587 START TEST nvmf_target_disconnect_tc1 00:31:17.587 ************************************ 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:31:17.587 
21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.587 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.587 [2024-06-07 21:48:17.500584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.587 [2024-06-07 21:48:17.500633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x954cb0 with addr=10.0.0.2, port=4420 00:31:17.587 [2024-06-07 21:48:17.500661] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:17.587 [2024-06-07 21:48:17.500678] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:17.587 [2024-06-07 21:48:17.500687] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:17.587 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:17.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:17.587 Initializing NVMe Controllers 00:31:17.587 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:17.588 00:31:17.588 real 0m0.130s 00:31:17.588 user 0m0.052s 00:31:17.588 sys 0m0.077s 
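Note on the tc1 pass condition: this case deliberately points the reconnect example at 10.0.0.2:4420 before any nvmf_tgt has been started, and wraps the run in the harness's NOT helper, so the connect() refusal (errno 111) traced above is exactly what makes the test pass. A minimal sketch of that inversion idiom, simplified from autotest_common.sh (the real helper also inspects exit codes above 128 to tell signals from plain failures, as the '(( es > 128 ))' trace shows):

  NOT() {
      # invert the wrapped command: succeed only if it fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'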
00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:17.588 ************************************ 00:31:17.588 END TEST nvmf_target_disconnect_tc1 00:31:17.588 ************************************ 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:17.588 ************************************ 00:31:17.588 START TEST nvmf_target_disconnect_tc2 00:31:17.588 ************************************ 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1638456 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1638456 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1638456 ']' 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:17.588 21:48:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:17.588 [2024-06-07 21:48:17.633129] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
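The tc2 case starts here by launching a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace on cores 4-7 (-m 0xF0) and blocking until its RPC socket answers. A condensed sketch of that startup, with waitforlisten approximated by a polling loop (rpc.py and rpc_get_methods are stock SPDK tooling; the loop itself is an illustration, not the harness code):

  # start the target on cores 4-7 inside the test namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app accepts commands
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done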
00:31:17.588 [2024-06-07 21:48:17.633181] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.588 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.588 [2024-06-07 21:48:17.725821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.588 [2024-06-07 21:48:17.817344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.588 [2024-06-07 21:48:17.817383] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.588 [2024-06-07 21:48:17.817393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.588 [2024-06-07 21:48:17.817402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.588 [2024-06-07 21:48:17.817409] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.588 [2024-06-07 21:48:17.817525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:17.588 [2024-06-07 21:48:17.817661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:17.588 [2024-06-07 21:48:17.817774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:17.588 [2024-06-07 21:48:17.817774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.532 Malloc0 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.532 [2024-06-07 21:48:18.630933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.532 21:48:18 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.532 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.533 [2024-06-07 21:48:18.659192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1638661 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:18.533 21:48:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.533 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.437 21:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1638456 00:31:20.437 21:48:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O 
failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Write completed with error (sct=0, sc=8) 00:31:20.437 starting I/O failed 00:31:20.437 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 [2024-06-07 21:48:20.688932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 
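For orientation amid the error burst: before the kill -9 above, tc2 provisioned the target over the RPC socket with the rpc_cmd calls traced earlier. Stripped of the wrapper, that sequence is roughly equivalent to the following sketch (socket path and script location assumed from the trace; the -o transport flag is carried over verbatim from NVMF_TRANSPORT_OPTS):

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420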
Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 [2024-06-07 21:48:20.689227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed 
with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 [2024-06-07 21:48:20.689518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error 
(sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Write completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.438 Read completed with error (sct=0, sc=8) 00:31:20.438 starting I/O failed 00:31:20.439 Write completed with error (sct=0, sc=8) 00:31:20.439 starting I/O failed 00:31:20.439 Read completed with error (sct=0, sc=8) 00:31:20.439 starting I/O failed 00:31:20.439 Write completed with error (sct=0, sc=8) 00:31:20.439 starting I/O failed 00:31:20.439 Read completed with error (sct=0, sc=8) 00:31:20.439 starting I/O failed 00:31:20.439 [2024-06-07 21:48:20.689689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.439 [2024-06-07 21:48:20.689934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.689956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.690274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.690290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.690531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.690546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.690803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.690818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.691034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.691049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.691231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.691246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.691485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.691515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 
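Reading the burst above: sct=0, sc=8 is the generic NVMe status "Command Aborted due to SQ Deletion", which is what SPDK reports for the 32 outstanding I/Os per qpair once the hard-killed target's connections drop, and transport error -6 is ENXIO, as the message itself spells out ("No such device or address"). When triaging a run like this offline, a couple of greps summarize the damage; a hedged sketch assuming the console output was saved to a file named console.log (hypothetical name):

  # how many I/Os were aborted, and which qpairs saw the CQ transport error
  grep -c 'completed with error (sct=0, sc=8)' console.log
  grep -o 'CQ transport error -6 ([^)]*) on qpair id [0-9]*' console.log | sort | uniq -c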
00:31:20.439 [2024-06-07 21:48:20.691760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.691790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.692061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.692092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.692303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.692333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.692598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.692629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.692891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.692921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.693233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.693265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.693520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.693551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.693724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.693754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.694135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.694181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.694451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.694483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 
00:31:20.439 [2024-06-07 21:48:20.694768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.694798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.695091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.695124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.695325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.695356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.695542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.695572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.695819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.695840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.696018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.696046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.696225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.696244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.696527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.696562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.696821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.696851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.697108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.697118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 
00:31:20.439 [2024-06-07 21:48:20.697334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.697343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.697505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.697518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.697793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.697802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.698071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.698081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.698242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.698252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.698403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.698412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.698576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.698586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.698840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.439 [2024-06-07 21:48:20.698849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.439 qpair failed and we were unable to recover it. 00:31:20.439 [2024-06-07 21:48:20.699057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.699067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.699280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.699289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 
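Each entry above is one connect() attempt by the reconnect example returning errno 111 (ECONNREFUSED): with the target process gone, TCP connections to 10.0.0.2:4420 are simply refused. Two quick probes can confirm that state from a shell (standard iproute2 and bash facilities; the echo text is illustrative):

  # no listener should remain on 4420 inside the target namespace after the kill
  ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'
  # a bash-only connect probe reproduces the same refusal the reconnect app logs
  timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo 'connect refused, as expected'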
00:31:20.440 [2024-06-07 21:48:20.699504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.699514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.699743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.699752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.699892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.699901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.700100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.700109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.700320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.700330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.700539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.700548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.700764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.700773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.700968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.700978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.701266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.701276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.701474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.701484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 
00:31:20.440 [2024-06-07 21:48:20.701758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.701767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.702036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.702046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.702185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.702195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.702396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.702405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.702700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.702709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.702912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.702921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.703086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.703096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.703307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.703316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.703617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.703627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.703771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.703781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 
00:31:20.440 [2024-06-07 21:48:20.704071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.704081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.704306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.704315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.440 [2024-06-07 21:48:20.704552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.440 [2024-06-07 21:48:20.704561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.440 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.704806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.704815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.705033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.705043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.705316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.705326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.705550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.705559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.705695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.705704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.706017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.706071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.706384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.706415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 
00:31:20.714 [2024-06-07 21:48:20.706670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.706700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.706944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.706974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.707261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.707271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.707470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.707479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.707698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.707708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.707907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.707916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.708203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.708212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.708312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.708320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.708475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.708484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.708766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.708775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 
00:31:20.714 [2024-06-07 21:48:20.709043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.709053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.709256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.714 [2024-06-07 21:48:20.709265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.714 qpair failed and we were unable to recover it. 00:31:20.714 [2024-06-07 21:48:20.709354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.709363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.709635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.709645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.709797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.709806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.710013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.710063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.710227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.710258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.710579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.710609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.710853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.710883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.711204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.711235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 
00:31:20.715 [2024-06-07 21:48:20.711481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.711511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.711798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.711829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.712194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.712225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.712416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.712447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.712701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.712732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.713059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.713069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.713433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.713464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.713773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.713803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.714008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.714060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.714308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.714318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 
00:31:20.715 [2024-06-07 21:48:20.714586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.714595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.714926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.714956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.715325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.715358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.715694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.715724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.716015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.716057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.716348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.716378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.716736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.716767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.717131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.717162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.717354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.717384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.717643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.717674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 
00:31:20.715 [2024-06-07 21:48:20.718003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.718012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.718254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.718263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.718478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.718487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.718732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.718741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.718953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.718962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.719169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.719179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.719494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.719503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.719644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.719653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.719809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.719818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.715 qpair failed and we were unable to recover it. 00:31:20.715 [2024-06-07 21:48:20.720032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.715 [2024-06-07 21:48:20.720041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-06-07 21:48:20.720274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.720284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.720494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.720502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.720722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.720753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.721095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.721127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.721333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.721363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.721711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.721742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.722076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.722108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.722407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.722438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.722645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.722675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.722919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.722949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-06-07 21:48:20.723280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.723290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.723530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.723539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.723686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.723696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.723910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.723919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.724203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.724235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.724543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.724574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.724906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.724936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.725196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.725233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.725528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.725540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.725749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.725758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-06-07 21:48:20.726043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.726074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.726389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.726419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.726695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.726725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.727112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.727145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.727487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.727517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.727767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.727776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.727990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.727999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.728293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.728303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.728537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.728546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.728838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.728847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 
00:31:20.716 [2024-06-07 21:48:20.729082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.729091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.729385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.729395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.729550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.729560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.729722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.729731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.730039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.730071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.730270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.716 [2024-06-07 21:48:20.730300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.716 qpair failed and we were unable to recover it. 00:31:20.716 [2024-06-07 21:48:20.730500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.730530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.730863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.730891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.731149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.731180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.731490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.731521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 
00:31:20.717 [2024-06-07 21:48:20.731708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.731738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.731937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.731967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.732169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.732179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.732419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.732450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.732784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.732814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.733064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.733074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.733342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.733351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.733590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.733599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.733890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.733900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.734180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.734190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 
00:31:20.717 [2024-06-07 21:48:20.734430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.734439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.734718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.734727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.734994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.735003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.735245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.735254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.735384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.735394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.735605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.735613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.735893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.735924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.736183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.736214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.736457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.736493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.736636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.736666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 
00:31:20.717 [2024-06-07 21:48:20.736926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.736956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.737149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.737181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.737518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.737548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.737855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.737885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.738221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.738230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.738457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.738466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.738711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.738720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.738885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.738895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.739102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.739112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.739324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.739333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 
00:31:20.717 [2024-06-07 21:48:20.739599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.739629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.739889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.739920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.740239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.717 [2024-06-07 21:48:20.740248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.717 qpair failed and we were unable to recover it. 00:31:20.717 [2024-06-07 21:48:20.740488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.740498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.740789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.740798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.740962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.740972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.741187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.741196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.741433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.741463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.741723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.741754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.742094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.742126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 
00:31:20.718 [2024-06-07 21:48:20.742462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.742493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.742823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.742854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.743066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.743097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.743383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.743414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.743681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.743711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.743966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.743997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.744418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.744487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.744796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.744864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.745201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.745270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 00:31:20.718 [2024-06-07 21:48:20.745461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.718 [2024-06-07 21:48:20.745472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.718 qpair failed and we were unable to recover it. 
00:31:20.718 [2024-06-07 21:48:20.745770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.718 [2024-06-07 21:48:20.745779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:20.718 qpair failed and we were unable to recover it.
00:31:20.719 (the same sequence repeats for every reconnect attempt timestamped [2024-06-07 21:48:20.745998] through [2024-06-07 21:48:20.753508], all against tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420)
00:31:20.719 [2024-06-07 21:48:20.753773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.753782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.753992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.754001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.754155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.754165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.754319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.754328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.754621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.754630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.754772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.754781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.755035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.755066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.755274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.755304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.755638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.755668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.756040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.756072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 
00:31:20.719 [2024-06-07 21:48:20.756316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.756346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.756588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.756619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.756805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.756815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.757042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.757067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.757345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.757355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.757580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.757589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.757815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.757824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.758121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.758131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.758346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.758356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.719 [2024-06-07 21:48:20.758565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.758574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 
00:31:20.719 [2024-06-07 21:48:20.758728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.719 [2024-06-07 21:48:20.758737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.719 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.759050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.759081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.759357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.759387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.759724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.759754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.760077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.760108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.760368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.760399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.760585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.760615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.760907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.760938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.761245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.761255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.761418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.761427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 
00:31:20.720 [2024-06-07 21:48:20.761721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.761730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.761843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.761852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.761998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.762008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.762227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.762258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.762514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.762545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.762873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.762882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.763149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.763159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.763422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.763442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.763736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.763745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.763951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.763960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 
00:31:20.720 [2024-06-07 21:48:20.764092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.764101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.764339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.764349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.764488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.764497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.764638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.764647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.764861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.764870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.765108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.765118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.765410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.765420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.765545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.765554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.765775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.765784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.765924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.765933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 
00:31:20.720 [2024-06-07 21:48:20.766231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.766240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.766441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.766450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.766664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.766673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.766885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.766894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.767159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.767168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.767404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.767414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.767707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.767716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.767958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.767968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.720 [2024-06-07 21:48:20.768181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.720 [2024-06-07 21:48:20.768194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.720 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.768406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.768415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 
00:31:20.721 [2024-06-07 21:48:20.768688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.768718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.768958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.768989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.769303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.769313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.769613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.769622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.769780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.769811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.770085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.770118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.770361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.770392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.770741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.770771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.771104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.771135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.771334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.771364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 
00:31:20.721 [2024-06-07 21:48:20.771646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.771677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.771922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.771931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.772132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.772142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.772408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.772417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.772721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.772731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.773024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.773038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.773242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.773273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.773462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.773493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.773765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.773795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.774157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.774189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 
00:31:20.721 [2024-06-07 21:48:20.774449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.774480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.774663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.774693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.775048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.775080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.775422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.775452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.775768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.775798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.776056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.776088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.776425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.776455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.776738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.776769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.776964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.776974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.777204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.777214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 
00:31:20.721 [2024-06-07 21:48:20.777457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.777466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.777756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.777765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.778051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.778060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.778301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.778310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.778596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.778605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.778800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.778809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.721 [2024-06-07 21:48:20.779021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.721 [2024-06-07 21:48:20.779033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.721 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.779145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.779154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.779429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.779439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.779735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.779744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 
00:31:20.722 [2024-06-07 21:48:20.779978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.779987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.780197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.780207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.780498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.780528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.780870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.780900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.781152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.781183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.781518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.781548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.781858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.781888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.782144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.782177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.782437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.782467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.782801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.782832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 
00:31:20.722 [2024-06-07 21:48:20.783171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.783203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.783540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.783570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.783862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.783892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.784232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.784262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.784601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.784630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.784978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.785009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.785298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.785330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.785651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.785681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.785870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.785901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.786171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.786203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 
00:31:20.722 [2024-06-07 21:48:20.786452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.786460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.786749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.786758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.786969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.786978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.787203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.787212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.787483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.787493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.787726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.787735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.787880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.787890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.788041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.788050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.788286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.788295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 00:31:20.722 [2024-06-07 21:48:20.788438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.722 [2024-06-07 21:48:20.788448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.722 qpair failed and we were unable to recover it. 
00:31:20.722 [2024-06-07 21:48:20.788725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.722 [2024-06-07 21:48:20.788755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:20.722 qpair failed and we were unable to recover it.
00:31:20.724 [2024-06-07 21:48:20.798058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.724 [2024-06-07 21:48:20.798127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:20.724 qpair failed and we were unable to recover it.
00:31:20.726 [2024-06-07 21:48:20.820911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.726 [2024-06-07 21:48:20.820980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420
00:31:20.726 qpair failed and we were unable to recover it.
00:31:20.728 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error pair repeats roughly 200 more times through 2024-06-07 21:48:20.847425, alternating between tqpair=0x7fb7bc000b90, 0x7fb7c4000b90 and 0x13b4d60, always against addr=10.0.0.2, port=4420; every qpair failed and could not be recovered ...]
00:31:20.728 [2024-06-07 21:48:20.847634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.728 [2024-06-07 21:48:20.847643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.728 qpair failed and we were unable to recover it. 00:31:20.728 [2024-06-07 21:48:20.847960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.728 [2024-06-07 21:48:20.847969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.728 qpair failed and we were unable to recover it. 00:31:20.728 [2024-06-07 21:48:20.848111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.728 [2024-06-07 21:48:20.848120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.728 qpair failed and we were unable to recover it. 00:31:20.728 [2024-06-07 21:48:20.848383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.728 [2024-06-07 21:48:20.848392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.728 qpair failed and we were unable to recover it. 00:31:20.728 [2024-06-07 21:48:20.848598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.728 [2024-06-07 21:48:20.848607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.728 qpair failed and we were unable to recover it. 00:31:20.728 [2024-06-07 21:48:20.848812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.728 [2024-06-07 21:48:20.848822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.728 qpair failed and we were unable to recover it. 00:31:20.728 [2024-06-07 21:48:20.849123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.849155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.849400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.849430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.849632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.849662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.849861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.849891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 
00:31:20.729 [2024-06-07 21:48:20.850147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.850178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.850490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.850520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.850835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.850865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.851175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.851206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.851418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.851448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.851775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.851805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.852145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.852176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.852373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.852404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.852734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.852764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.853093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.853124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 
00:31:20.729 [2024-06-07 21:48:20.853375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.853385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.853530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.853539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.853696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.853706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.853970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.853979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.854200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.854209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.854422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.854433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.854698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.854707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.855011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.855020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.855232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.855241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.855484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.855493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 
00:31:20.729 [2024-06-07 21:48:20.855773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.855782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.856075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.856084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.856299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.856309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.856467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.856476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.856706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.856716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.856919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.856949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.857284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.857315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.857563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.857572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.857728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.857737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.858038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.858068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 
00:31:20.729 [2024-06-07 21:48:20.858321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.858352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.858603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.858633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.858990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.859020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.729 [2024-06-07 21:48:20.859276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.729 [2024-06-07 21:48:20.859306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.729 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.859649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.859679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.859948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.859978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.860262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.860294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.860605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.860635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.860885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.860928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.861142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.861152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 
00:31:20.730 [2024-06-07 21:48:20.861472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.861481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.861748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.861757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.861922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.861931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.862225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.862235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.862526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.862535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.862804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.862813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.863046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.863055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.863209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.863218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.863381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.863390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.863605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.863614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 
00:31:20.730 [2024-06-07 21:48:20.863822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.863831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.864123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.864132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.864345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.864355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.864571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.864601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.864885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.864915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.865171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.865184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.865395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.865425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.865668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.865698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.865889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.865918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.866234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.866244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 
00:31:20.730 [2024-06-07 21:48:20.866548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.866558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.866846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.866855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.867123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.867132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.867449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.867458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.867758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.867789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.868101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.868150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.868442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.868472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.868786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.868816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.869075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.869106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.869415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.869424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 
00:31:20.730 [2024-06-07 21:48:20.869738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.869748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.730 [2024-06-07 21:48:20.869954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.730 [2024-06-07 21:48:20.869963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.730 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.870097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.870106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.870392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.870423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.870680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.870710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.870954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.870984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.871319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.871350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.871707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.871737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.872015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.872063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.872319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.872350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 
00:31:20.731 [2024-06-07 21:48:20.872604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.872634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.872919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.872949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.873280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.873290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.873505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.873514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.873668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.873677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.873875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.873884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.874095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.874104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.874215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.874225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.874426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.874456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.874713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.874744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 
00:31:20.731 [2024-06-07 21:48:20.875061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.875092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.875416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.875447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.875772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.875802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.876137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.876168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.876501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.876531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.876872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.876908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.877086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.877096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.877356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.877386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.877591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.877622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.877957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.877987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 
00:31:20.731 [2024-06-07 21:48:20.878317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.878348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.878651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.878660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.878968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.878977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.879194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.879204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.879498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.879506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.879717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.879726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.879877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.879886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.880156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.731 [2024-06-07 21:48:20.880165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.731 qpair failed and we were unable to recover it. 00:31:20.731 [2024-06-07 21:48:20.880402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.880433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.880805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.880835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 
00:31:20.732 [2024-06-07 21:48:20.881039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.881070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.881272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.881303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.881506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.881515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.881738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.881747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.881869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.881879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.882087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.882096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.882368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.882377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.882664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.882695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.882952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.882982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 00:31:20.732 [2024-06-07 21:48:20.883271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.732 [2024-06-07 21:48:20.883280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.732 qpair failed and we were unable to recover it. 
00:31:20.732 [2024-06-07 21:48:20.883546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.732 [2024-06-07 21:48:20.883555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:20.732 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence for tqpair=0x7fb7bc000b90 repeats 29 more times, 2024-06-07 21:48:20.883847 through 21:48:20.891998 (replay timestamps 00:31:20.732-00:31:20.733) ...]
00:31:20.733 [2024-06-07 21:48:20.892316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.733 [2024-06-07 21:48:20.892358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:20.733 qpair failed and we were unable to recover it.
00:31:20.733 [2024-06-07 21:48:20.892683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.733 [2024-06-07 21:48:20.892702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:20.733 qpair failed and we were unable to recover it.
00:31:20.733 [2024-06-07 21:48:20.892867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2a00 is same with the state(5) to be set
00:31:20.733 [2024-06-07 21:48:20.893146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.733 [2024-06-07 21:48:20.893157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:20.733 qpair failed and we were unable to recover it.
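[editor's note, not part of the log] For anyone triaging this section: errno 111 on Linux is ECONNREFUSED, meaning the TCP connection attempt to 10.0.0.2:4420 (the default NVMe/TCP port) is actively refused, typically because the host is reachable but nothing is listening on that port while the target is down. The standalone sketch below is illustrative only (the file name is made up; it is not part of the test suite) and reproduces the same failure mode that posix_sock_create reports above.

/* econnrefused_demo.c - minimal reproduction of the failure seen above:
 * a blocking connect() to a reachable host with no listener on the
 * port returns -1 with errno = ECONNREFUSED (111 on Linux). If the
 * host were unreachable instead, connect() would time out with a
 * different errno, so the 111s here imply the host is up but the
 * NVMe/TCP target is not listening on 4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* default NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}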
[... the connect() failed / sock connection error / qpair failed sequence for tqpair=0x7fb7bc000b90 repeats 176 more times, 2024-06-07 21:48:20.893323 through 21:48:20.942785 (replay timestamps 00:31:20.733-00:31:20.738) ...]
00:31:20.738 [2024-06-07 21:48:20.943044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.943075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.943263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.943272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.943488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.943498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.943762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.943771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.944087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.944097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.944296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.944306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.944464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.944473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.944783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.944792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.945014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.945023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.945328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.945337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 
00:31:20.738 [2024-06-07 21:48:20.945632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.945641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.945930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.945959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.946270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.946301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.946481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.946490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.946651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.946660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.946977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.947007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.947280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.947311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.947483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.947492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.947797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.947806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.948126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.948137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 
00:31:20.738 [2024-06-07 21:48:20.948335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.948345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.948575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.948584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.948850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.948859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.949014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.949023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.949371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.949402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.949772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.949802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.950083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.950114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.950371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.950401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.950677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.950708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 00:31:20.738 [2024-06-07 21:48:20.950965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.738 [2024-06-07 21:48:20.950994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.738 qpair failed and we were unable to recover it. 
00:31:20.738 [2024-06-07 21:48:20.951345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.951354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.951574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.951582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.951853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.951861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.952079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.952087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.952386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.952394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.952710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.952718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.952854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.952862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.953072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.953080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.953344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.953352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.953578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.953586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 
00:31:20.739 [2024-06-07 21:48:20.953819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.953827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.954068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.954077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.954343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.954351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.954569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.954578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.954794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.954802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.954962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.954971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.955199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.955209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.955414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.955423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.955625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.955634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.955903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.955911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 
00:31:20.739 [2024-06-07 21:48:20.956119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.956128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.956283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.956293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.956587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.956595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.956756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.956764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.957033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.957042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.957335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.957344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.957558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.957589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.957843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.957873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.958077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.958108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.958465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.958500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 
00:31:20.739 [2024-06-07 21:48:20.958770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.958800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.959062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.959093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.959337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.959367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.959654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.959664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.959880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.959889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.960091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.960100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.960328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.960337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.739 qpair failed and we were unable to recover it. 00:31:20.739 [2024-06-07 21:48:20.960618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.739 [2024-06-07 21:48:20.960649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.960959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.960990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.961182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.961214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 
00:31:20.740 [2024-06-07 21:48:20.961551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.961582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.961873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.961883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.962043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.962053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.962280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.962311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.962494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.962525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.962868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.962898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.963287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.963319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.963587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.963596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.963758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.963767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.963897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.963906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 
00:31:20.740 [2024-06-07 21:48:20.964181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.964190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.964435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.964465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.964652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.964682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.964927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.964957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.965242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.965273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.965479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.965509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.965825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.965855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.966128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.966138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.966357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.966370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.966638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.966647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 
00:31:20.740 [2024-06-07 21:48:20.966807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.966838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.967089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.967119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.967456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.967487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.967804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.967834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.968218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.968249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:20.740 [2024-06-07 21:48:20.968491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.740 [2024-06-07 21:48:20.968516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:20.740 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.968713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.968722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.969004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.969013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.969283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.969292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.969560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.969571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 
00:31:21.009 [2024-06-07 21:48:20.969725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.969734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.969884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.969894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.970128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.970138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.970338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.970347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.970556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.970566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.970858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.970868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.971091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.971101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.971311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.971320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.971524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.971533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.971740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.971750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 
00:31:21.009 [2024-06-07 21:48:20.972043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.972053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.972263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.972272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.972542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.972572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.972850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.972880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.973150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.973181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.009 qpair failed and we were unable to recover it. 00:31:21.009 [2024-06-07 21:48:20.973537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.009 [2024-06-07 21:48:20.973567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.973825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.973856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.974164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.974195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.974506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.974536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.974792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.974823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 
00:31:21.010 [2024-06-07 21:48:20.975083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.975114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.975310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.975341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.975623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.975653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.975990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.976021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.976347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.976377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.976710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.976740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.977090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.977158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.977510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.977544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.977806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.977838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 00:31:21.010 [2024-06-07 21:48:20.978069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.010 [2024-06-07 21:48:20.978080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.010 qpair failed and we were unable to recover it. 
00:31:21.010 [2024-06-07 21:48:20.978349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.010 [2024-06-07 21:48:20.978358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.010 qpair failed and we were unable to recover it.
[... the same three-message failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously, every few hundred microseconds, from 21:48:20.978349 through 21:48:21.031907 ...]
00:31:21.013 [2024-06-07 21:48:21.008065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.013 [2024-06-07 21:48:21.008135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:21.013 qpair failed and we were unable to recover it.
[... three consecutive attempts in this window (21:48:21.008065 through 21:48:21.008805) report tqpair=0x7fb7c4000b90; every other attempt in the span uses tqpair=0x7fb7bc000b90 and fails identically ...]
00:31:21.016 [2024-06-07 21:48:21.031898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.016 [2024-06-07 21:48:21.031907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.016 qpair failed and we were unable to recover it.
00:31:21.016 [2024-06-07 21:48:21.032062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.032072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.032269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.032278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.032428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.032437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.032774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.032783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.032937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.032947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.033221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.033230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.033427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.033436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.033623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.033653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.033912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.033943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.034215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.034246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 
00:31:21.016 [2024-06-07 21:48:21.034563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.034594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.034929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.034938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.035232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.035263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.035527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.035557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.035813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.035843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.036152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.036161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.036384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.036394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.036605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.036614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.036890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.036920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.037258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.037289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 
00:31:21.016 [2024-06-07 21:48:21.037611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.037641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.037955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.037964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.038175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.038184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.038440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.038449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.038648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.038658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.038937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.038946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.039213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.039224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.039491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.039500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.039764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.039773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 00:31:21.016 [2024-06-07 21:48:21.039922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.016 [2024-06-07 21:48:21.039932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.016 qpair failed and we were unable to recover it. 
00:31:21.017 [2024-06-07 21:48:21.040145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.040154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.040428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.040458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.040797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.040827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.041018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.041031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.041349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.041359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.041584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.041593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.041796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.041805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.042039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.042049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.042337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.042346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.042497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.042506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 
00:31:21.017 [2024-06-07 21:48:21.042812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.042842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.043117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.043148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.043430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.043460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.043657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.043666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.043879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.043888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.044164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.044195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.044511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.044542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.044727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.044736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.045038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.045048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.045248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.045257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 
00:31:21.017 [2024-06-07 21:48:21.045481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.045490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.045713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.045723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.046050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.046081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.046332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.046363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.046700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.046709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.046970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.047001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.047323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.047354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.047595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.047625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.047964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.047974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.048247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.048278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 
00:31:21.017 [2024-06-07 21:48:21.048530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.048559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.048853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.048862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.048957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.048965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.049195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.049226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.049505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.049535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.049722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.049753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.050083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.050094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.017 [2024-06-07 21:48:21.050295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.017 [2024-06-07 21:48:21.050325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.017 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.050635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.050644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.050795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.050804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 
00:31:21.018 [2024-06-07 21:48:21.051117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.051127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.051356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.051366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.051495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.051505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.051767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.051777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.051935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.051944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.052146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.052156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.052406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.052436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.052639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.052670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.052955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.052986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.053239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.053270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 
00:31:21.018 [2024-06-07 21:48:21.053538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.053569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.053769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.053798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.054050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.054060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.054204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.054213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.054463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.054492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.054736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.054767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.055024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.055063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.055398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.055428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.055668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.055699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.056046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.056078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 
00:31:21.018 [2024-06-07 21:48:21.056418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.056449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.056758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.056788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.057099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.057129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.057394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.057424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.057685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.057715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.058023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.058064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.058301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.058310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.058441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.058450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.058666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.058675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 00:31:21.018 [2024-06-07 21:48:21.058869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.018 [2024-06-07 21:48:21.058878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.018 qpair failed and we were unable to recover it. 
00:31:21.018 [2024-06-07 21:48:21.059092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.059123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.059413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.059443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.059725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.059755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.059999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.060054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.060342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.060372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.060577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.060607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.060943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.060954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.061256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.061287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.061538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.061569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.061813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.061822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 
00:31:21.019 [2024-06-07 21:48:21.062036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.062046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.062256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.062265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.062472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.062481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.062640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.062649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.062896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.062927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.063118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.063149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.063395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.063426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.063590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.063599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.063806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.063836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.064159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.064190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 
00:31:21.019 [2024-06-07 21:48:21.064506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.064536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.064780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.064810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.065121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.065152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.065479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.065509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.065771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.065801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.066118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.066149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.066398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.066429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.066664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.066673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.066887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.066896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 00:31:21.019 [2024-06-07 21:48:21.067093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.019 [2024-06-07 21:48:21.067102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.019 qpair failed and we were unable to recover it. 
00:31:21.019 [2024-06-07 21:48:21.067252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.019 [2024-06-07 21:48:21.067261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.019 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error and "qpair failed and we were unable to recover it.") repeats continuously from 21:48:21.067 through 21:48:21.123 (log clock 00:31:21.019-00:31:21.025), mostly for tqpair=0x7fb7bc000b90 and briefly for tqpair=0x7fb7b4000b90, always with addr=10.0.0.2, port=4420; duplicate entries elided ...]
00:31:21.025 [2024-06-07 21:48:21.123899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.025 [2024-06-07 21:48:21.123919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.025 qpair failed and we were unable to recover it.
00:31:21.025 [2024-06-07 21:48:21.124148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.124158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.124438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.124448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.124661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.124671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.124935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.124944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.125240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.125250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.125408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.125418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.125637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.125650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.125916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.125925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.126147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.126156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.126447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.126457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 
00:31:21.025 [2024-06-07 21:48:21.126611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.126621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.126843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.126852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.127147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.127156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.127317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.127326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.127489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.127498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.127708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.127717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.127856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.127866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.128094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.128104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.128313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.128322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.128590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.128599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 
00:31:21.025 [2024-06-07 21:48:21.128829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.025 [2024-06-07 21:48:21.128839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.025 qpair failed and we were unable to recover it. 00:31:21.025 [2024-06-07 21:48:21.129046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.129056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.129410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.129419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.130115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.130135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.130307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.130316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.130534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.130544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.130750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.130760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.131017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.131057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.131315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.131346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.131551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.131582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 
00:31:21.026 [2024-06-07 21:48:21.131842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.131873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.132202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.132212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.132485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.132495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.132693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.132703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.132836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.132845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.133062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.133094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.133361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.133393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.133657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.133687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.133967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.133997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.134168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.134178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 
00:31:21.026 [2024-06-07 21:48:21.134404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.134435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.134679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.134709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.134906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.134936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.135196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.135227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.135416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.135447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.135655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.135685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.135926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.135962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.136167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.136199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.136465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.136495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.136776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.136806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 
00:31:21.026 [2024-06-07 21:48:21.137149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.137181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.137461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.137491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.137639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.137669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.137923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.137954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.138213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.138244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.138557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.138588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.138765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.138774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.138987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.138996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.026 [2024-06-07 21:48:21.139292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.026 [2024-06-07 21:48:21.139323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.026 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.139516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.139546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 
00:31:21.027 [2024-06-07 21:48:21.139784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.139793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.140092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.140123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.140369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.140399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.140658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.140688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.140882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.140913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.141148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.141157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.141374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.141384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.141681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.141691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.141916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.141925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.142178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.142188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 
00:31:21.027 [2024-06-07 21:48:21.142344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.142353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.142590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.142621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.142873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.142904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.143201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.143270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.143569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.143604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.143946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.143977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.144254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.144287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.144600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.144629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.144995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.145033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.145350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.145381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 
00:31:21.027 [2024-06-07 21:48:21.145745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.145775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.146089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.146119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.146431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.146461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.146796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.146826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.147175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.147206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.147412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.147443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.147778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.147817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.148018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.148043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.148329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.148346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.148570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.148587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 
00:31:21.027 [2024-06-07 21:48:21.148805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.148822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.149055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.149066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.027 qpair failed and we were unable to recover it. 00:31:21.027 [2024-06-07 21:48:21.149361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.027 [2024-06-07 21:48:21.149371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.149533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.149543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.149814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.149844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.150121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.150153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.150348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.150378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.150716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.150747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.151037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.151069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.151307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.151338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 
00:31:21.028 [2024-06-07 21:48:21.151609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.151640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.151891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.151928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.152174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.152184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.152483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.152513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.152764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.152795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.153051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.153083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.153400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.153431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.153636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.153666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.153915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.153945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.154126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.154136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 
00:31:21.028 [2024-06-07 21:48:21.154436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.154466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.154802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.154832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.155120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.155130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.155276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.155286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.155497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.155506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.155751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.155781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.156092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.156123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.156307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.156337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.156592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.156622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.156879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.156910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 
00:31:21.028 [2024-06-07 21:48:21.157251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.157282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.157554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.157584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.157887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.157925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.158087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.158096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.158251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.158260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.158405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.158414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.158633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.158644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.158891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.158922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.159107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.159138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 00:31:21.028 [2024-06-07 21:48:21.159387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.028 [2024-06-07 21:48:21.159417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.028 qpair failed and we were unable to recover it. 
00:31:21.028 [2024-06-07 21:48:21.159678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.159709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.160058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.160090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.160410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.160419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.160629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.160638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.160872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.160881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.161176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.161186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.161453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.161462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.161700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.161709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.161915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.161924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.162080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.162089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 
00:31:21.029 [2024-06-07 21:48:21.162361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.162370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.162501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.162511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.162756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.162787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.163044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.163075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.163331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.163361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.163618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.163648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.163907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.163937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.164186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.164195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.164329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.164339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.164565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.164596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 
00:31:21.029 [2024-06-07 21:48:21.164852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.164882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.165222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.165254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.165614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.165644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.165977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.166057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.166365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.166406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.166647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.166666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.166978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.166996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.167238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.167257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.167505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.167522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.167743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.167754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 
00:31:21.029 [2024-06-07 21:48:21.167992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.168001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.168149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.168158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.168396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.168426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.168764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.168794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.169126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.169157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.169495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.169525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.169785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.169815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.029 [2024-06-07 21:48:21.170159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.029 [2024-06-07 21:48:21.170190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.029 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.170375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.170405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.170654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.170684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 
00:31:21.030 [2024-06-07 21:48:21.170994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.171032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.171320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.171329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.171629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.171638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.171799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.171809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.172099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.172130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.172401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.172431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.172700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.172731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.172971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.173001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.173254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.173264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.173475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.173484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 
00:31:21.030 [2024-06-07 21:48:21.173741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.173750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.173894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.173903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.174177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.174209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.174464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.174494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.174858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.174888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.175217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.175227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.175526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.175556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.175897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.175928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.176269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.176300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.176651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.176682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 
00:31:21.030 [2024-06-07 21:48:21.176985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.176994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.177229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.177238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.177438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.177447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.177590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.177601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.177864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.177874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.178085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.178095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.178412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.178422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.178626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.178635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.178898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.178907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.179123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.179132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 
00:31:21.030 [2024-06-07 21:48:21.179399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.179408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.179568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.179577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.179799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.179829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.180024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.180064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.180271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.180305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.180646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.030 [2024-06-07 21:48:21.180676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.030 qpair failed and we were unable to recover it. 00:31:21.030 [2024-06-07 21:48:21.181014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.181053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.181367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.181397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.181707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.181737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.182043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.182069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 
00:31:21.031 [2024-06-07 21:48:21.182305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.182314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.182476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.182485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.182780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.182810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.183065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.183096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.183339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.183370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.183632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.183662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.184034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.184065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.184265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.184296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.184610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.184640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.184897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.184927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 
00:31:21.031 [2024-06-07 21:48:21.185192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.185202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.185473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.185482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.185700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.185709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.185956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.185966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.186099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.186109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.186260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.186269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.186534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.186543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.186814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.186844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.187087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.187118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.187446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.187477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 
00:31:21.031 [2024-06-07 21:48:21.187728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.187758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.188022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.188064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.188217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.188247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.188588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.188628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.188823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.188854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.189165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.189196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.189437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.189467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.189671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.189701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.189911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.189920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.190091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.190101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 
00:31:21.031 [2024-06-07 21:48:21.190398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.190407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.190567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.190576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.190729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.190739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.191042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.031 [2024-06-07 21:48:21.191074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.031 qpair failed and we were unable to recover it. 00:31:21.031 [2024-06-07 21:48:21.191333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.191363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.191621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.191651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.191904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.191934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.192212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.192222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.192490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.192500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.192635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.192645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 
00:31:21.032 [2024-06-07 21:48:21.192861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.192870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.193009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.193018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.193235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.193245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.193469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.193479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.193773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.193782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.193997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.194006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.194271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.194281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.194496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.194505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.194715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.194745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.195083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.195115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 
00:31:21.032 [2024-06-07 21:48:21.195445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.195454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.195671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.195680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.195905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.195914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.196131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.196163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.196342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.196372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.196680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.196711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.197078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.197109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.197392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.197422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.197770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.197800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.198011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.198050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 
00:31:21.032 [2024-06-07 21:48:21.198295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.198325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.198529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.198560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.198896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.198931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.199137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.199150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.199443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.199452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.199677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.199686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.032 qpair failed and we were unable to recover it. 00:31:21.032 [2024-06-07 21:48:21.199883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.032 [2024-06-07 21:48:21.199892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.200034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.200044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.200243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.200252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.200465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.200495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 
00:31:21.033 [2024-06-07 21:48:21.200747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.200777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.201087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.201118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.201309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.201339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.201685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.201715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.201969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.201998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.202412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.202480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.202874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.202909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.203269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.203288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.203544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.203575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.203945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.203963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 
00:31:21.033 [2024-06-07 21:48:21.204261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.204292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.204553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.204583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.204913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.204931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.205161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.205193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.205451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.205481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.205824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.205854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.206053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.206084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.206362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.206392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.206646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.206676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 00:31:21.033 [2024-06-07 21:48:21.206999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.033 [2024-06-07 21:48:21.207039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.033 qpair failed and we were unable to recover it. 
00:31:21.033 [2024-06-07 21:48:21.207373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.033 [2024-06-07 21:48:21.207404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:21.033 qpair failed and we were unable to recover it.
00:31:21.039 [... the three messages above repeat ~210 times between 21:48:21.207373 and 21:48:21.264241 (console time 00:31:21.033-00:31:21.039), the tqpair handle alternating between 0x7fb7c4000b90 and 0x7fb7bc000b90; every connect() to 10.0.0.2 port 4420 fails with errno = 111 and no qpair recovers ...]
00:31:21.039 [2024-06-07 21:48:21.264384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.264393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.264611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.264641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.264900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.264931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.265266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.265303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.265682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.265712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.265967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.265998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.266243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.266285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.266612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.266632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.266858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.266889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.267216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.267248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 
00:31:21.039 [2024-06-07 21:48:21.267573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.267591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.267913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.267930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.268163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.268174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.268386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.268395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.268607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.268616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.268881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.268890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.039 [2024-06-07 21:48:21.269171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.039 [2024-06-07 21:48:21.269181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.039 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.269384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.269393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.269628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.269637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.269928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.269937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 
00:31:21.312 [2024-06-07 21:48:21.270162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.270172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.270384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.270393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.270659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.270668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.270932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.270941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.271151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.271161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.271412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.271421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.271738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.271747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.272040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.272050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.272341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.272350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.272670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.272679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 
00:31:21.312 [2024-06-07 21:48:21.272840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.272849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.273156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.273166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.273484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.273494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.273733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.273742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.273948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.273957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.274253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.274263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.274555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.274564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.274881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.274890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.312 [2024-06-07 21:48:21.275099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.312 [2024-06-07 21:48:21.275109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.312 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.275397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.275407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 
00:31:21.313 [2024-06-07 21:48:21.275671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.275681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.275889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.275898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.276218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.276228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.276474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.276484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.276798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.276828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.277081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.277112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.277308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.277339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.277568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.277578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.277787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.277796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.278089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.278098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 
00:31:21.313 [2024-06-07 21:48:21.278395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.278404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.278650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.278659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.278925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.278935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.279221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.279231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.279534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.279543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.279779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.279788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.279994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.280004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.280296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.280306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.280517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.280527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.280757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.280766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 
00:31:21.313 [2024-06-07 21:48:21.280970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.280979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.281272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.281282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.281574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.281583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.281918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.281948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.282283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.282314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.282625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.282656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.283009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.283061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.283323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.283332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.283605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.283614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.283894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.283903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 
00:31:21.313 [2024-06-07 21:48:21.284170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.284189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.284485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.284494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.284788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.284797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.285089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.285098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.285407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.285416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.285659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.285668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.313 qpair failed and we were unable to recover it. 00:31:21.313 [2024-06-07 21:48:21.285939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.313 [2024-06-07 21:48:21.285970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.286291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.286322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.286636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.286667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.286989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.287019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 
00:31:21.314 [2024-06-07 21:48:21.287370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.287401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.287737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.287767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.287958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.287988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.288308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.288319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.288633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.288642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.288908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.288917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.289246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.289278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.289616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.289647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.289983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.290013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.290357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.290387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 
00:31:21.314 [2024-06-07 21:48:21.290654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.290684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.291065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.291097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.291445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.291476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.291813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.291843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.292177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.292209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.292510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.292519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.292810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.292819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.293114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.293124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.293357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.293366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.293638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.293648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 
00:31:21.314 [2024-06-07 21:48:21.293884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.293893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.294159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.294169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.294459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.294469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.294760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.294769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.295007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.295016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.295342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.295351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.295624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.295655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.296041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.296072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.296319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.296329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.296634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.296643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 
00:31:21.314 [2024-06-07 21:48:21.296847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.296856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.297153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.297162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.297381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.297390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.297586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.297595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.314 qpair failed and we were unable to recover it. 00:31:21.314 [2024-06-07 21:48:21.297803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.314 [2024-06-07 21:48:21.297812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.298033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.298042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.298309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.298319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.298619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.298629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.298927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.298936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.299222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.299232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 
00:31:21.315 [2024-06-07 21:48:21.299443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.299452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.299769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.299778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.300042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.300052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.300349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.300359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.300522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.300531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.300802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.300833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.301041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.301072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.301347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.301378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.301733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.301764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 00:31:21.315 [2024-06-07 21:48:21.302016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.315 [2024-06-07 21:48:21.302058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.315 qpair failed and we were unable to recover it. 
00:31:21.315 [2024-06-07 21:48:21.302319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.315 [2024-06-07 21:48:21.302349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.315 qpair failed and we were unable to recover it.
[log condensed: the same three-message sequence (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for roughly 210 consecutive reconnect attempts between 2024-06-07 21:48:21.302 and 21:48:21.367; only the first and last occurrences are shown.]
00:31:21.321 [2024-06-07 21:48:21.367439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.321 [2024-06-07 21:48:21.367448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.321 qpair failed and we were unable to recover it.
00:31:21.321 [2024-06-07 21:48:21.367822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.367832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.368052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.368077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.368391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.368400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.368619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.368628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.368837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.368846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.369090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.369100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.369422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.369431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.369726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.369736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.369969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.369978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.370269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.370279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 
00:31:21.321 [2024-06-07 21:48:21.370475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.370487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.370706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.370715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.371005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.371014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.371322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.371332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.371641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.371682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.372038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.372069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.372326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.372356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.372638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.372669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.373044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.373075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.373325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.373355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 
00:31:21.321 [2024-06-07 21:48:21.373605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.373614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.373816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.373825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.374040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.374049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.374271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.374281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.374551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.374561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.374869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.374878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.375076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.375086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.375306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.375316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.375555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.375564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 00:31:21.321 [2024-06-07 21:48:21.375867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.321 [2024-06-07 21:48:21.375876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.321 qpair failed and we were unable to recover it. 
00:31:21.321 [2024-06-07 21:48:21.376151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.376161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.376372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.376382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.376529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.376539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.376757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.376787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.377076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.377107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.377390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.377420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.377807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.377838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.378121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.378154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.378396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.378406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.378608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.378618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 
00:31:21.322 [2024-06-07 21:48:21.378926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.378936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.379192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.379202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.379469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.379478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.379748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.379758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.379998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.380008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.380241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.380250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.380493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.380524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.380834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.380864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.381177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.381209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.381542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.381573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 
00:31:21.322 [2024-06-07 21:48:21.381925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.381961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.382277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.382309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.382562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.382592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.382769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.382799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.383056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.383087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.383276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.383307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.383645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.383676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.384016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.384056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.384366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.384375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.384715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.384724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 
00:31:21.322 [2024-06-07 21:48:21.384950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.384980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.385328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.385358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.385611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.385620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.385860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.385870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.386162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.386172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.386478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.386488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.386688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.386698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.386906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.322 [2024-06-07 21:48:21.386937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.322 qpair failed and we were unable to recover it. 00:31:21.322 [2024-06-07 21:48:21.387273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.387305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.387563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.387593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 
00:31:21.323 [... seven more identical failure triplets for tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 (21:48:21.387954 through 21:48:21.390093) ...]
00:31:21.323 [2024-06-07 21:48:21.390487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.323 [2024-06-07 21:48:21.390556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:21.323 qpair failed and we were unable to recover it.
00:31:21.323 [2024-06-07 21:48:21.390898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.323 [2024-06-07 21:48:21.390965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420
00:31:21.323 qpair failed and we were unable to recover it.
00:31:21.323 [2024-06-07 21:48:21.391339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.323 [2024-06-07 21:48:21.391381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420
00:31:21.323 qpair failed and we were unable to recover it.
00:31:21.323 [2024-06-07 21:48:21.391639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.391650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.391943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.391953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.392237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.392247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.392487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.392496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.392737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.392746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.393029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.393039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.393249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.393258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.393559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.393568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.393804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.393813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.394127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.394137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 
00:31:21.323 [2024-06-07 21:48:21.394404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.394415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.394657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.394667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.394953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.394963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.395122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.395132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.395289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.395298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.395514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.395523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.395865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.395874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.396092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.396101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.396308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.396339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.396549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.396580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 
00:31:21.323 [2024-06-07 21:48:21.396772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.396783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.397038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.397070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.397391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.397422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.397732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.397742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.398034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.398043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.398269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.398279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.323 [2024-06-07 21:48:21.398513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.323 [2024-06-07 21:48:21.398522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.323 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.398925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.398955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.399270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.399301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.399564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.399573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 
00:31:21.324 [2024-06-07 21:48:21.399854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.399863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.400075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.400085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.400408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.400417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.400635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.400645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.400855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.400865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.401183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.401194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.401409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.401419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.401716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.401726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.401876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.401886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.402212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.402222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 
00:31:21.324 [2024-06-07 21:48:21.402497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.402506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.402839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.402848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.403271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.403303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.403636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.403646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.403923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.403953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.404243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.404274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.404482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.404512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.404857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.404887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.405280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.405311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.405511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.405521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 
00:31:21.324 [2024-06-07 21:48:21.405738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.405749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.405979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.405989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.406225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.406256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.406558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.406588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.406919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.406929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.407142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.407152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.407448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.407457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.407702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.407711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.407852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.407861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.408087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.408098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 
00:31:21.324 [2024-06-07 21:48:21.408321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.408352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.408679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.408709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.409063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.409095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.409423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.324 [2024-06-07 21:48:21.409453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.324 qpair failed and we were unable to recover it. 00:31:21.324 [2024-06-07 21:48:21.409795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-07 21:48:21.409825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-07 21:48:21.410137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-07 21:48:21.410168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-07 21:48:21.410446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-07 21:48:21.410455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-07 21:48:21.410679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-07 21:48:21.410688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-07 21:48:21.411002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-07 21:48:21.411012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 00:31:21.325 [2024-06-07 21:48:21.411360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.325 [2024-06-07 21:48:21.411372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.325 qpair failed and we were unable to recover it. 
00:31:21.325 [2024-06-07 21:48:21.411638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.411647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.411907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.411917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.412121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.412132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.412285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.412295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.412456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.412465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.412817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.412848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.413161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.413192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.413536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.413545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.413843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.413852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.414168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.414199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.414399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.414429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.414740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.414770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.414970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.415000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.415243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.415274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.415560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.415569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.415922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.415931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.416158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.416169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.416511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.416542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.416807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.416837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.417133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.417164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.417371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.417382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.417597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.417608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.417843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.417852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.418074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.418084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.418350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.418361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.418515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.418525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.418933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.418964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.419301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.419344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.419568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.419577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.419896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.419905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.420182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.420191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.420410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.420419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.325 qpair failed and we were unable to recover it.
00:31:21.325 [2024-06-07 21:48:21.420639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.325 [2024-06-07 21:48:21.420648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.420941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.420971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.421248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.421279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.421535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.421544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.421864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.421875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.422149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.422160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.422372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.422382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.422589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.422600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.422869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.422879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.423218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.423228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.423380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.423389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.423594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.423603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.423871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.423880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.424167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.424176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.424417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.424426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.424595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.424604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.424874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.424883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.425049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.425074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.425295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.425304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.425457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.425466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.425690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.425699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.425937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.425946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.426236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.426267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.426634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.426663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.426898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.426908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.427229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.427239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.427449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.427459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.427764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.427774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.428050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.428061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.428278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.428287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.428448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.428457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.428612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.428623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.428925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.428935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.326 [2024-06-07 21:48:21.429205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.326 [2024-06-07 21:48:21.429216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.326 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.429418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.429428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.429650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.429661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.429965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.429975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.430187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.430197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.430394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.430403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.430617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.430626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.430869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.430878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.431151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.431161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.431325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.431334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.431547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.431557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.431852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.431883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.432225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.432257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.432516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.432546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.432926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.432956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.433297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.433329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.433635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.433645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.433873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.433883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.434118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.434127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.434393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.434403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.434615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.434625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.434944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.434974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.435339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.435409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.435669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.435689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.435925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.435942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.436162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.436174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.436509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.436519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.436787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.436796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.437032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.437042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.437261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.437271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.437535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.437545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.437863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.437872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.438141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.438151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.438367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.438377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.438608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.438620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.438782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.438792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.439089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.439120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.439385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.439416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.439621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.327 [2024-06-07 21:48:21.439631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.327 qpair failed and we were unable to recover it.
00:31:21.327 [2024-06-07 21:48:21.439830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.439840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.440146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.440156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.440320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.440330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.440611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.440620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.440911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.440920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.441158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.441167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.441335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.441344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.441560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.441590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.441876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.441907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.442213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.442245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.442503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.442534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.442782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.442813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.443169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.443200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.443411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.443441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.443652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.443683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.443990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.443999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.444293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.444302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.444513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.444522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.444777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.444786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.445056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.445066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.445280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.445290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.445445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.445455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.445728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.445759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.446008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.446052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.446334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.446364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.446711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.446720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.447055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.447065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.447215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.447245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.447530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.447560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.447863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.447872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.448193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.448203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.448367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.448376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.448649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.448658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.448904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.448913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.449242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.449252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.449495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.449504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.449747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.449756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.449968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.449977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.328 qpair failed and we were unable to recover it.
00:31:21.328 [2024-06-07 21:48:21.450239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.328 [2024-06-07 21:48:21.450248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.450407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.450417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.450615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.450625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.450767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.450776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.450997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.451005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.451260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.451291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.451555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.451585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.451956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.451986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.452258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.452290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.452547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.452578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.452839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.452848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.453212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.453221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.453490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.453500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.453821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.453851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.454261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.454292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.454580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.454610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.454997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.455040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.455341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.455371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.455573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.455582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.455855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.455865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.456073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.456083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.456369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.456378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.456588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.456597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.456913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.456922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.457141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.457151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.457314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.457325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.457484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.457493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.457786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.329 [2024-06-07 21:48:21.457816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.329 qpair failed and we were unable to recover it.
00:31:21.329 [2024-06-07 21:48:21.458160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.458194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.458496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.458505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.458771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.458780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.459075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.459085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.459371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.459380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.459652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.459661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.459954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.459963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.460227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.460236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.460390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.460400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.329 [2024-06-07 21:48:21.460600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.460610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 
00:31:21.329 [2024-06-07 21:48:21.460824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.329 [2024-06-07 21:48:21.460834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.329 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.461122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.461132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.461299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.461309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.461527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.461537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.461758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.461767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.461999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.462008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.462347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.462357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.462518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.462528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.462856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.462865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.463157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.463167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 
00:31:21.330 [2024-06-07 21:48:21.463389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.463399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.463563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.463573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.463803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.463812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.464031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.464041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.464270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.464281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.464410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.464420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.464692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.464702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.464910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.464920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.465224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.465234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.465452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.465462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 
00:31:21.330 [2024-06-07 21:48:21.465678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.465687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.465903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.465913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.466142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.466151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.466355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.466364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.466652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.466662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.466817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.466826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.467132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.467141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.467407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.467418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.467637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.467646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.467866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.467875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 
00:31:21.330 [2024-06-07 21:48:21.468143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.468152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.468317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.468326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.468477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.468486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.330 [2024-06-07 21:48:21.468644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.330 [2024-06-07 21:48:21.468654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.330 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.468803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.468813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.469110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.469119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.469333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.469342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.469492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.469501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.469650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.469660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.469867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.469876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 
00:31:21.331 [2024-06-07 21:48:21.470090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.470099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.470311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.470321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.470490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.470499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.470655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.470665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.470939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.470948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.471161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.471170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.471379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.471388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.471596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.471605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.471879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.471888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.472159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.472168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 
00:31:21.331 [2024-06-07 21:48:21.472451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.472460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.472699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.472708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.472904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.472913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.473182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.473192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.473403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.473413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.473629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.473638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.473926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.473936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.474086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.474096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.474267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.474276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.474611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.474620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 
00:31:21.331 [2024-06-07 21:48:21.474884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.474894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.475212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.475222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.475439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.475448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.475581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.475591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.475817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.475826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.476131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.476163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.476479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.476510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.476780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.476816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.477129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.477161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.477421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.477451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 
00:31:21.331 [2024-06-07 21:48:21.477848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.477857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.478004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.331 [2024-06-07 21:48:21.478014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.331 qpair failed and we were unable to recover it. 00:31:21.331 [2024-06-07 21:48:21.478302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.478311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.478528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.478538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.478730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.478739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.478952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.478961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.479244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.479275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.479586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.479616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.479884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.479913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.480229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.480260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 
00:31:21.332 [2024-06-07 21:48:21.480470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.480501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.480817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.480826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.481094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.481103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.481333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.481343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.481662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.481671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.481902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.481912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.482127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.482158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.482496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.482527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.482847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.482877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.483238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.483247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 
00:31:21.332 [2024-06-07 21:48:21.483408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.483417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.483626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.483635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.483944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.483974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.484245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.484277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.484609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.484639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.484817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.484848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.485787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.485807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.486113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.486124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.486340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.486350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.486631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.486641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 
00:31:21.332 [2024-06-07 21:48:21.486907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.486916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.487133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.487143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.487294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.487303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.487520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.487529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.487845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.487855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.488006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.488016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.488182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.488191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.488460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.488472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.488805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.488836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 00:31:21.332 [2024-06-07 21:48:21.489096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.332 [2024-06-07 21:48:21.489128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.332 qpair failed and we were unable to recover it. 
00:31:21.332 [2024-06-07 21:48:21.489460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.489491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.489782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.489812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.490066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.490097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.490345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.490376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.490629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.490638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.490958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.490966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.491172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.491181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.491402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.491411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.491615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.491624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.491871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.491901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 
00:31:21.333 [2024-06-07 21:48:21.492276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.492308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.492626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.492657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.492983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.493014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.493356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.493387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.493598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.493607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.493857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.493889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.494098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.494130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.494414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.494445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.494656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.494687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.494968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.494999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 
00:31:21.333 [2024-06-07 21:48:21.495368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.495434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.495796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.495830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.496102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.496136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.496345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.496376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.496598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.496628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.496824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.496854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.497046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.497077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.497415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.497445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.497727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.497757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 00:31:21.333 [2024-06-07 21:48:21.498067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.333 [2024-06-07 21:48:21.498098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.333 qpair failed and we were unable to recover it. 
00:31:21.333 [2024-06-07 21:48:21.498368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.333 [2024-06-07 21:48:21.498399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420
00:31:21.333 qpair failed and we were unable to recover it.
00:31:21.333 [2024-06-07 21:48:21.498737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.333 [2024-06-07 21:48:21.498766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420
00:31:21.333 qpair failed and we were unable to recover it.
00:31:21.333 [2024-06-07 21:48:21.499020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.333 [2024-06-07 21:48:21.499035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.333 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplets repeat from 2024-06-07 21:48:21.499252 through 21:48:21.525996 ...]
00:31:21.336 [2024-06-07 21:48:21.526327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.336 [2024-06-07 21:48:21.526396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420
00:31:21.336 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x13b4d60 from 2024-06-07 21:48:21.526846 through 21:48:21.528129 ...]
00:31:21.336 [2024-06-07 21:48:21.528295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.336 [2024-06-07 21:48:21.528306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.336 qpair failed and we were unable to recover it.
[... the same triplets for tqpair=0x7fb7bc000b90 continue from 2024-06-07 21:48:21.528615 through 21:48:21.555292 ...]
00:31:21.339 [2024-06-07 21:48:21.555555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.555586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.555865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.555895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.556240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.556250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.556543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.556553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.556792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.556802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.557003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.557012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.557293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.557303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.557521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.557530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.557692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.557712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.558008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.558047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 
00:31:21.339 [2024-06-07 21:48:21.558315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.558346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.558605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.558635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.558991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.559022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.559293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.559323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.559636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.559666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.559999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.560040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.560295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.560325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.560692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.560722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.339 qpair failed and we were unable to recover it. 00:31:21.339 [2024-06-07 21:48:21.560994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.339 [2024-06-07 21:48:21.561023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.561384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.561414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 
00:31:21.340 [2024-06-07 21:48:21.561756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.561766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.562000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.562009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.562256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.562265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.562434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.562443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.562709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.562720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.563020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.563033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.563394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.563403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.563641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.563650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.563937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.563946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.564269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.564279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 
00:31:21.340 [2024-06-07 21:48:21.564591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.564601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.564871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.564902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.565162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.565193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.565531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.565562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.565903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.565934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.566222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.566232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.566402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.340 [2024-06-07 21:48:21.566411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.340 qpair failed and we were unable to recover it. 00:31:21.340 [2024-06-07 21:48:21.566630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-07 21:48:21.566639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-07 21:48:21.566873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-07 21:48:21.566882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.615 qpair failed and we were unable to recover it. 00:31:21.615 [2024-06-07 21:48:21.567216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.615 [2024-06-07 21:48:21.567226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 
00:31:21.616 [2024-06-07 21:48:21.567438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.567448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.567693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.567703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.567940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.567949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.568221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.568231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.568466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.568475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.568682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.568691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.568928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.568937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.569221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.569231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.569375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.569384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.569548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.569558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 
00:31:21.616 [2024-06-07 21:48:21.569803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.569813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.570138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.570148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.570421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.570451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.570704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.570734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.571121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.571152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.571435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.571466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.571677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.571708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.572044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.572076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.572336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.572366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.572645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.572675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 
00:31:21.616 [2024-06-07 21:48:21.572990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.573020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.573350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.573380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.573723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.573753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.574064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.574099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.574416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.574428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.574588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.574598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.574828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.574837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.575051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.575060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.575357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.575367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.575648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.575657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 
00:31:21.616 [2024-06-07 21:48:21.575941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.575971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.576358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.576390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.576653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.576683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.577012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.577052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.577305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.577335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.577647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.577678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.577957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.616 [2024-06-07 21:48:21.577986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.616 qpair failed and we were unable to recover it. 00:31:21.616 [2024-06-07 21:48:21.578340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.578349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.578644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.578653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.578913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.578943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 
00:31:21.617 [2024-06-07 21:48:21.579253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.579285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.579610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.579641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.579927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.579957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.580246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.580277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.580544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.580574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.580835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.580844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.581080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.581090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.581303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.581312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.581534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.581543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.581822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.581831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 
00:31:21.617 [2024-06-07 21:48:21.582048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.582058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.582273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.582282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.582496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.582505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.582799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.582808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.583090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.583100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.583255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.583264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.583427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.583436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.583663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.583673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.583960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.583991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.584296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.584327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 
00:31:21.617 [2024-06-07 21:48:21.584583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.584613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.584879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.584910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.585250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.585260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.585502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.585511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.585672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.585683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.585843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.585852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.586159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.586191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.586406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.586437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.586765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.586795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.587138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.587170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 
00:31:21.617 [2024-06-07 21:48:21.587465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.587495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.587851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.587882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.588219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.588229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.588495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.588504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.617 [2024-06-07 21:48:21.588768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.617 [2024-06-07 21:48:21.588777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.617 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.589023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.589036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.589345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.589355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.589568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.589578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.589810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.589819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.590085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.590095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 
00:31:21.618 [2024-06-07 21:48:21.590255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.590264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.590503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.590513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.590797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.590827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.591083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.591115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.591448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.591480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.591768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.591798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.592157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.592187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.592467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.592497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.592790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.592821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 00:31:21.618 [2024-06-07 21:48:21.593169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.618 [2024-06-07 21:48:21.593201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.618 qpair failed and we were unable to recover it. 
00:31:21.618 [2024-06-07 21:48:21.593536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.618 [2024-06-07 21:48:21.593566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.618 qpair failed and we were unable to recover it.
00:31:21.623 [... the same three-line error group repeats roughly 200 more times between 21:48:21.593 and 21:48:21.653, with only the microsecond timestamps changing; every retry targets tqpair=0x7fb7bc000b90 at 10.0.0.2, port 4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:31:21.624 [2024-06-07 21:48:21.653603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.653612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.653823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.653832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.653975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.653984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.654194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.654203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.654422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.654436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.654652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.654661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.654963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.654973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.655213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.655223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.655513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.655523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.655857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.655866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 
00:31:21.624 [2024-06-07 21:48:21.656160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.656170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.656334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.656344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.656561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.656570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.656870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.656879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.657233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.657243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.657528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.657537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.657771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.657780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.658055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.658064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.658281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.658290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.658508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.658517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 
00:31:21.624 [2024-06-07 21:48:21.658802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.658811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.659133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.659143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.659443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.659452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.659674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.659684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.659953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.659962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.660191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.660201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.660422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.660431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.660653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.660662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.660879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.660889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.661182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.661191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 
00:31:21.624 [2024-06-07 21:48:21.661408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.661418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.661623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.661632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.661865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.661875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.662105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.662114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.624 [2024-06-07 21:48:21.662331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.624 [2024-06-07 21:48:21.662340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.624 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.662485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.662495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.662711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.662720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.663012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.663022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.663228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.663238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.663531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.663540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 
00:31:21.625 [2024-06-07 21:48:21.663880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.663889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.664180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.664190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.664409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.664419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.664583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.664593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.664946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.664957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.665208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.665218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.665438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.665447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.665734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.665744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.666009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.666019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.666357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.666368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 
00:31:21.625 [2024-06-07 21:48:21.666530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.666539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.666765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.666774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.667016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.667028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.667189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.667198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.667409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.667418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.667635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.667644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.667938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.667948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.668242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.668252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.668475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.668485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.668728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.668738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 
00:31:21.625 [2024-06-07 21:48:21.669043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.669054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.669314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.669324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.669536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.669545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.669855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.669865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.670133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.670143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.670358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.670367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.670642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.670651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.625 [2024-06-07 21:48:21.670979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.625 [2024-06-07 21:48:21.670989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.625 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.671253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.671262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.671491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.671501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 
00:31:21.626 [2024-06-07 21:48:21.671767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.671776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.672011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.672021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.672320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.672329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.672550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.672559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.672868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.672878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.673035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.673045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.673314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.673324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.673538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.673548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.673799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.673809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.673971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.673980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 
00:31:21.626 [2024-06-07 21:48:21.674294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.674304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.674520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.674529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.674769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.674779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.675067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.675077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.675342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.675354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.675569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.675578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.675886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.675895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.676180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.676190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.676391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.676400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.676611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.676620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 
00:31:21.626 [2024-06-07 21:48:21.676778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.676788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.677062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.677072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.677338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.677347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.677544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.677554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.677809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.677818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.677976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.677986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.678214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.678224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.678370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.678379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.678540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.678549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.678780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.678790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 
00:31:21.626 [2024-06-07 21:48:21.679069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.679078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.679235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.679244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.679511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.679520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.679705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.679735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.680021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.680061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.680260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.626 [2024-06-07 21:48:21.680291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.626 qpair failed and we were unable to recover it. 00:31:21.626 [2024-06-07 21:48:21.680570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.680600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.680909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.680940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.681285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.681320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.681579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.681610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 
00:31:21.627 [2024-06-07 21:48:21.681971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.682002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.682213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.682249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.682521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.682552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.682937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.682968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.683239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.683249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.683391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.683400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.683624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.683654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.683905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.683936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.684302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.684334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.684586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.684616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 
00:31:21.627 [2024-06-07 21:48:21.684958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.684989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.685204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.685236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.685526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.685556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.685905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.685935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.686213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.686223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.686465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.686474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.686828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.686858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.687202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.687233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.687511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.687519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 00:31:21.627 [2024-06-07 21:48:21.687819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.627 [2024-06-07 21:48:21.687828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.627 qpair failed and we were unable to recover it. 
00:31:21.627 [2024-06-07 21:48:21.687984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:21.627 [2024-06-07 21:48:21.687993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 
00:31:21.627 qpair failed and we were unable to recover it. 
00:31:21.627 [... the three records above repeat for every reconnect attempt from 21:48:21.688 through 21:48:21.741, each attempt failing identically with errno = 111 against tqpair=0x7fb7bc000b90, addr=10.0.0.2, port=4420 ...] 
00:31:21.633 [2024-06-07 21:48:21.741892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:21.633 [2024-06-07 21:48:21.741902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 
00:31:21.633 qpair failed and we were unable to recover it. 
00:31:21.633 [2024-06-07 21:48:21.742143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.742153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.742433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.742442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.742732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.742741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.743030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.743040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.743348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.743357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.743518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.743527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.743724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.743734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.743966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.743975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.744294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.744304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.744597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.744607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 
00:31:21.633 [2024-06-07 21:48:21.744850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.744859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.745126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.745137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.745343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.745352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.745568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.745577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.745829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.745838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.746152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.746162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.746459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.746469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.746730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.746740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.746968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.746978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.747242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.747252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 
00:31:21.633 [2024-06-07 21:48:21.747545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.747554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.747879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.747888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.748119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.748129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.748325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.748334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.748545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.748554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.748822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.748831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.749149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.749159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.749427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.749436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.633 [2024-06-07 21:48:21.749592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.633 [2024-06-07 21:48:21.749601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.633 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.749813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.749823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 
00:31:21.634 [2024-06-07 21:48:21.749959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.749968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.750263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.750273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.750562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.750572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.750894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.750904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.751206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.751215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.751441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.751450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.751756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.751766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.751964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.751973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.752263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.752273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.752485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.752494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 
00:31:21.634 [2024-06-07 21:48:21.752694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.752704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.753022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.753046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.753356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.753365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.753535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.753544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.753686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.753696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.753896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.753905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.754187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.754207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.754452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.754462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.754739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.754748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.754909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.754919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 
00:31:21.634 [2024-06-07 21:48:21.755144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.755154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.755376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.755387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.755657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.755667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.755962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.755971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.756183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.756193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.756398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.756408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.756645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.756654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.756884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.756893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.757187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.757196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.757489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.757498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 
00:31:21.634 [2024-06-07 21:48:21.757818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.757827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.634 [2024-06-07 21:48:21.758063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.634 [2024-06-07 21:48:21.758073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.634 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.758344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.758354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.758693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.758703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.759029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.759038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.759239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.759248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.759516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.759525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.759751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.759760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.760054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.760064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.760333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.760342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 
00:31:21.635 [2024-06-07 21:48:21.760633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.760642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.760936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.760946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.761190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.761200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.761469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.761478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.761714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.761723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.762013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.762023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.762189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.762198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.762452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.762461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.762677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.762686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.762997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.763006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 
00:31:21.635 [2024-06-07 21:48:21.763241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.763251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.763574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.763583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.763875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.763884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.764154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.764163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.764391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.764401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.764689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.764699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.764943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.764952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.765236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.765245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.765543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.765553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.765790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.765799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 
00:31:21.635 [2024-06-07 21:48:21.766097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.766106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.766395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.766405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.766649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.766658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.766953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.766962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.767221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.767231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.767528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.767537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.767773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.767782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.768078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.768088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.768320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.768329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 00:31:21.635 [2024-06-07 21:48:21.768460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.635 [2024-06-07 21:48:21.768469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.635 qpair failed and we were unable to recover it. 
00:31:21.635 [2024-06-07 21:48:21.768589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.768599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.768939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.768948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.769286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.769295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.769590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.769599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.769828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.769837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.769995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.770005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.770330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.770340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.770617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.770626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.770807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.770816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.771053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.771063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 
00:31:21.636 [2024-06-07 21:48:21.771280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.771290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.771506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.771516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.771793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.771802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.772100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.772109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.772412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.772421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.772644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.772653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.772926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.772935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.773206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.773216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.773446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.773455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.773694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.773703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 
00:31:21.636 [2024-06-07 21:48:21.774021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.774034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.774267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.774276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.774597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.774606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.774842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.774851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.775084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.775094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.775416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.775425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.775697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.775707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.775909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.775918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.776216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.776226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 00:31:21.636 [2024-06-07 21:48:21.776495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.776505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it. 
00:31:21.636 [2024-06-07 21:48:21.776788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.636 [2024-06-07 21:48:21.776798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.636 qpair failed and we were unable to recover it.
[... the preceding connect()/qpair error pair repeats for every reconnect attempt from 21:48:21.777036 through 21:48:21.832362; each attempt fails identically with errno = 111 on the same tqpair=0x7fb7bc000b90, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:31:21.642 [2024-06-07 21:48:21.832981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.833011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it.
00:31:21.642 [2024-06-07 21:48:21.833305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.833335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.833674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.833705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.833970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.834001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.834319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.834388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.834817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.834883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.835228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.835295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.835647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.835680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.836054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.836085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.836379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.836409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.836740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.836768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 
00:31:21.642 [2024-06-07 21:48:21.837047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.837079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.837334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.837364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.837678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.837716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.837930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.837939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.838217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.838226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.838492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.838502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.838796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.838806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.839019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.839034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.839339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.839351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.839636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.839645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 
00:31:21.642 [2024-06-07 21:48:21.839941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.839950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.840154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.840164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.642 qpair failed and we were unable to recover it. 00:31:21.642 [2024-06-07 21:48:21.840360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.642 [2024-06-07 21:48:21.840369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.840638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.840647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.840927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.840957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.841256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.841287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.841552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.841582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.841841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.841872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.842234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.842266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.842467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.842497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 
00:31:21.643 [2024-06-07 21:48:21.842855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.842885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.843220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.843252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.843573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.843583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.843880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.843889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.844177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.844187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.844330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.844340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.844505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.844514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.844724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.844733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.845017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.845071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.845338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.845368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 
00:31:21.643 [2024-06-07 21:48:21.845657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.845688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.845956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.845987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.846268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.846300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.846557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.846587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.846862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.846893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.847251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.847283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.847593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.847622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.847982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.848012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.848223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.848254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.848594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.848624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 
00:31:21.643 [2024-06-07 21:48:21.848962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.848992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.849365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.849397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.849590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.849620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.849997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.850124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.850426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.850458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.643 qpair failed and we were unable to recover it. 00:31:21.643 [2024-06-07 21:48:21.850762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.643 [2024-06-07 21:48:21.850792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.851055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.851087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.851361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.851392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.851700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.851711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.852007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.852016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 
00:31:21.644 [2024-06-07 21:48:21.852231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.852240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.852485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.852494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.852821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.852830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.853086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.853096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.853334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.853343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.853581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.853590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.853870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.853879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.854211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.854220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.854444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.854475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.854752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.854791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 
00:31:21.644 [2024-06-07 21:48:21.855079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.855089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.855323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.855333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.855603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.855613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.855961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.855970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.856270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.856280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.856525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.856534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.856807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.856816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.857036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.857047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.857353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.857363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.857560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.857569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 
00:31:21.644 [2024-06-07 21:48:21.857840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.857849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.858116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.858125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.858352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.858362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.858574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.858583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.858813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.858845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.859138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.859171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.859378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.859408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.859767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.859797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.860145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.860177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.860514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.860523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 
00:31:21.644 [2024-06-07 21:48:21.860742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.860773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.861133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.861164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.861475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.644 [2024-06-07 21:48:21.861484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.644 qpair failed and we were unable to recover it. 00:31:21.644 [2024-06-07 21:48:21.861694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.861703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.862023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.862044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.862338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.862347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.862638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.862648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.862869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.862879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.863038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.863049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.863292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.863302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 
00:31:21.645 [2024-06-07 21:48:21.863449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.863458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.863755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.863785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.864070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.864101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.864354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.864385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.864677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.864687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.864925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.864935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.865162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.865171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.865375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.865385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.865612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.865622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.865865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.865875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 
00:31:21.645 [2024-06-07 21:48:21.866035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.866045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.866279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.866288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.866505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.866514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.866678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.866687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.866922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.866931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.867144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.867154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.645 [2024-06-07 21:48:21.867365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.645 [2024-06-07 21:48:21.867375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.645 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.867584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.867594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.867903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.867913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.868181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.868190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 
00:31:21.932 [2024-06-07 21:48:21.868403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.868412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.868679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.868688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.868962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.868972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.869263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.869274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.869489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.869499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.869710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.869719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.870022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.870036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.870357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.870367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.870537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.870546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 00:31:21.932 [2024-06-07 21:48:21.870686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.932 [2024-06-07 21:48:21.870696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.932 qpair failed and we were unable to recover it. 
00:31:21.932 [2024-06-07 21:48:21.870990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.932 [2024-06-07 21:48:21.871000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.932 qpair failed and we were unable to recover it.
00:31:21.932 [... the same three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt from 21:48:21.871 through 21:48:21.933; only the timestamps differ ...]
00:31:21.938 [2024-06-07 21:48:21.933148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.938 [2024-06-07 21:48:21.933179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.938 qpair failed and we were unable to recover it.
00:31:21.938 [2024-06-07 21:48:21.933439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.933469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.933686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.933717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.934057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.934088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.934405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.934436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.934774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.934805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.935121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.935153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.935423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.935454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.935797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.935827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.936173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.936204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.936522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.936552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 
00:31:21.938 [2024-06-07 21:48:21.936926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.936956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.937292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.937324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.937680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.937710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.938046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.938056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.938340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.938350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.938501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.938511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.938812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.938843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.939153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.939185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.939472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.939503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.939819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.939850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 
00:31:21.938 [2024-06-07 21:48:21.940195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.940205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.940479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.940489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.940702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.940711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.941011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.941021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.941187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.941196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.941528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.938 [2024-06-07 21:48:21.941560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.938 qpair failed and we were unable to recover it. 00:31:21.938 [2024-06-07 21:48:21.941902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.941940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.942223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.942233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.942603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.942613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.942936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.942968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 
00:31:21.939 [2024-06-07 21:48:21.943312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.943344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.943675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.943704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.943982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.943992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.944233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.944243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.944540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.944549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.944822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.944832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.945044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.945054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.945292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.945301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.945576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.945588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.945794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.945805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 
00:31:21.939 [2024-06-07 21:48:21.946105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.946116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.946340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.946349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.946641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.946666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.946869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.946878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.947035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.947045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.947361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.947392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.947727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.947757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.948096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.948106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.948416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.948425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.948637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.948646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 
00:31:21.939 [2024-06-07 21:48:21.948960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.948970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.949253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.949263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.949560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.949570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.949869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.949878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.950175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.950185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.950420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.950430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.950730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.950740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.950967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.950976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.951272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.951282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.951572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.951582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 
00:31:21.939 [2024-06-07 21:48:21.951881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.951891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.952098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.952108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.952248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.952258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.952494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.939 [2024-06-07 21:48:21.952503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.939 qpair failed and we were unable to recover it. 00:31:21.939 [2024-06-07 21:48:21.952808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.952818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.953136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.953146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.953360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.953370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.953663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.953672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.953958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.953967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.954285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.954316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 
00:31:21.940 [2024-06-07 21:48:21.954634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.954666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.954923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.954953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.955312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.955322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.955629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.955660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.955862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.955892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.956237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.956268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.956607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.956637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.956980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.957011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.957370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.957401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.957758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.957789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 
00:31:21.940 [2024-06-07 21:48:21.958128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.958138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.958289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.958298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.958597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.958606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.958842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.958852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.959099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.959109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.959339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.959348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.959655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.959664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.959962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.959971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.960255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.960264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.960505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.960514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 
00:31:21.940 [2024-06-07 21:48:21.960800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.960809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.961131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.961141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.961359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.961368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.961597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.961607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.961824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.961833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.962173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.962183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.962532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.962562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.962895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.962926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.963273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.963304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.963646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.963677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 
00:31:21.940 [2024-06-07 21:48:21.963962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.963993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.940 [2024-06-07 21:48:21.964353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.940 [2024-06-07 21:48:21.964385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.940 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.964667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.964697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.965045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.965076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.965447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.965478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.965799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.965834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.966169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.966178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.966490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.966521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.966880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.966911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.967208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.967218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 
00:31:21.941 [2024-06-07 21:48:21.967515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.967525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.967746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.967756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.967959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.967968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.968167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.968177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.968457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.968488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.968875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.968906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.969218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.969229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.969437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.969447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.969691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.969701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.969923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.969933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 
00:31:21.941 [2024-06-07 21:48:21.970204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.970215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.970519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.970529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.970851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.970861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.971139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.971149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.971456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.971482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.971755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.971786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.972156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.972187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.972460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.972490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.972737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.972768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 00:31:21.941 [2024-06-07 21:48:21.973108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.941 [2024-06-07 21:48:21.973138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.941 qpair failed and we were unable to recover it. 
00:31:21.941 [2024-06-07 21:48:21.973440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.941 [2024-06-07 21:48:21.973449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.941 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats for every reconnect attempt from 21:48:21.973 through 21:48:22.034, always errno = 111 (connection refused) for tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:31:21.947 [2024-06-07 21:48:22.034496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.034505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.034768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.034778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.035102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.035111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.035405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.035414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.035700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.035709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.035981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.035993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.036287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.036297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.036587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.036597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.036848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.036857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.037112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.037122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 
00:31:21.947 [2024-06-07 21:48:22.037399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.037408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.037734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.037743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.038044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.038054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.038259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.038269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.038487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.038497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.038829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.038860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.039197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.039232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.947 [2024-06-07 21:48:22.039432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.947 [2024-06-07 21:48:22.039441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.947 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.039741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.039750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.040021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.040034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 
00:31:21.948 [2024-06-07 21:48:22.040362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.040372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.040713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.040744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.040950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.040980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.041339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.041371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.041657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.041688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.042062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.042095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.042443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.042473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.042818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.042848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.043112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.043143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.043497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.043527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 
00:31:21.948 [2024-06-07 21:48:22.043841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.043872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.044202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.044234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.044579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.044610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.044812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.044842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.045131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.045162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.045466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.045475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.045691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.045700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.046070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.046080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.046388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.046397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.046695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.046705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 
00:31:21.948 [2024-06-07 21:48:22.047033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.047064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.047407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.047437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.047757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.047787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.048048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.048079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.048336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.048345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.048640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.048651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.048863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.048873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.049162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.049171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.049464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.049473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.049778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.049809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 
00:31:21.948 [2024-06-07 21:48:22.050084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.050116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.050401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.050430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.050702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.050732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.050997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.051044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.051367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.051376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.051648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.948 [2024-06-07 21:48:22.051678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.948 qpair failed and we were unable to recover it. 00:31:21.948 [2024-06-07 21:48:22.051897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.051927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.052301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.052333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.052676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.052706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.052968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.052977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 
00:31:21.949 [2024-06-07 21:48:22.053273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.053282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.053498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.053508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.053758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.053788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.054043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.054075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.054384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.054394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.054670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.054679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.054974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.054984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.055309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.055340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.055629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.055659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.055971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.056002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 
00:31:21.949 [2024-06-07 21:48:22.056269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.056300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.056573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.056603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.056939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.057009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.057388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.057423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.057680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.057711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.057977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.058008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.058283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.058294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.058566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.058576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.058780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.058790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.058954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.058963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 
00:31:21.949 [2024-06-07 21:48:22.059283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.059316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.059662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.059692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.059935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.059966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.060283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.060314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.060642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.060672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.061014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.061060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.061403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.061434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.061781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.061812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.062126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.062158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.062500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.062530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 
00:31:21.949 [2024-06-07 21:48:22.062725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.062755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.063013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.063056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.063350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.063381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.063660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.949 [2024-06-07 21:48:22.063691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.949 qpair failed and we were unable to recover it. 00:31:21.949 [2024-06-07 21:48:22.064004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.064058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.064252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.064262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.064508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.064538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.064850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.064880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.065206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.065216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.065437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.065446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 
00:31:21.950 [2024-06-07 21:48:22.065651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.065661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.065931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.065940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.066261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.066271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.066587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.066607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.066863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.066872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.067084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.067093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.067435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.067466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.067729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.067760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.068092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.068123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.068377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.068386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 
00:31:21.950 [2024-06-07 21:48:22.068599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.068608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.068905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.068914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.069202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.069212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.069495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.069504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.069773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.069783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.070085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.070095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.070378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.070387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.070599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.070608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.070825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.070834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.071086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.071096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 
00:31:21.950 [2024-06-07 21:48:22.071328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.071337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.071579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.071588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.071906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.071915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.072208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.072217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.072455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.072485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.072850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.950 [2024-06-07 21:48:22.072885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.950 qpair failed and we were unable to recover it. 00:31:21.950 [2024-06-07 21:48:22.073186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.951 [2024-06-07 21:48:22.073219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.951 qpair failed and we were unable to recover it. 00:31:21.951 [2024-06-07 21:48:22.073489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.951 [2024-06-07 21:48:22.073519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.951 qpair failed and we were unable to recover it. 00:31:21.951 [2024-06-07 21:48:22.073877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.951 [2024-06-07 21:48:22.073907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.951 qpair failed and we were unable to recover it. 00:31:21.951 [2024-06-07 21:48:22.074248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.951 [2024-06-07 21:48:22.074279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.951 qpair failed and we were unable to recover it. 
00:31:21.951 [2024-06-07 21:48:22.074560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.951 [2024-06-07 21:48:22.074592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:21.951 qpair failed and we were unable to recover it.
00:31:21.956 (the same connect() failed / sock connection error / qpair failed sequence repeats, with only the timestamps changing, roughly 200 more times through [2024-06-07 21:48:22.137114]; every attempt targets tqpair=0x7fb7bc000b90 at addr=10.0.0.2, port=4420 and fails with errno = 111)
00:31:21.956 [2024-06-07 21:48:22.137445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.137455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.137686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.137696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.138017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.138030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.138309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.138320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.138623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.138633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.138903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.138912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.139244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.139276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.139610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.139641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.140006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.140060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 00:31:21.956 [2024-06-07 21:48:22.140478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.956 [2024-06-07 21:48:22.140508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.956 qpair failed and we were unable to recover it. 
00:31:21.957 [2024-06-07 21:48:22.140764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.140773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.141058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.141084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.141321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.141332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.141580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.141589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.141813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.141823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.141974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.141984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.142198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.142230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.142501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.142531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.142875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.142906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.143188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.143219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 
00:31:21.957 [2024-06-07 21:48:22.143512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.143543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.143894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.143924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.144188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.144221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.144506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.144538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.144827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.144858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.145231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.145263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.145585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.145617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.145874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.145906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.146250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.146282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.146532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.146542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 
00:31:21.957 [2024-06-07 21:48:22.146872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.146882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.147141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.147151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.147368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.147378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.147628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.147638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.147846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.147855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.148143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.148153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.148320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.148330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.148575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.148605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.148888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.148919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.149263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.149295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 
00:31:21.957 [2024-06-07 21:48:22.149635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.149645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.149953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.149962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.150191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.150203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.150521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.150531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.150797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.150807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.151034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.151044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.151316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.151326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.151605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.957 [2024-06-07 21:48:22.151614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.957 qpair failed and we were unable to recover it. 00:31:21.957 [2024-06-07 21:48:22.151925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.151934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.152252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.152262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 
00:31:21.958 [2024-06-07 21:48:22.152483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.152493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.152754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.152763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.153049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.153061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.153330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.153340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.153644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.153653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.153921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.153931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.154225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.154235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.154456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.154466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.154685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.154694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.154978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.154988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 
00:31:21.958 [2024-06-07 21:48:22.155241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.155251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.155459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.155469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.155795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.155805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.155975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.156005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.156274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.156284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.156562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.156572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.156967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.156998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.157315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.157346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.157603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.157612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.157834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.157843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 
00:31:21.958 [2024-06-07 21:48:22.158083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.158092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.158359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.158368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.158643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.158652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.158937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.158946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.159160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.159169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.159419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.159428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.159663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.159694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.160056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.160087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.160354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.160385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.160707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.160738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 
00:31:21.958 [2024-06-07 21:48:22.160994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.161033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.161297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.161307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.161524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.161534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.161873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.161899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.162234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.162265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.162574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.958 [2024-06-07 21:48:22.162604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.958 qpair failed and we were unable to recover it. 00:31:21.958 [2024-06-07 21:48:22.162870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.162901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.163248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.163279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.163494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.163525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.163850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.163859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 
00:31:21.959 [2024-06-07 21:48:22.164095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.164105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.164268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.164278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.164497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.164545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.164862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.164892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.165158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.165190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.165482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.165522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.165722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.165731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.166069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.166079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.166376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.166385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.166609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.166618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 
00:31:21.959 [2024-06-07 21:48:22.166933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.166942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.167092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.167101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.167310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.167320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.167628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.167659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.168003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.168054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.168390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.168399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.168700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.168710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.169018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.169074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.169414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.169452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.169667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.169677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 
00:31:21.959 [2024-06-07 21:48:22.169887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.169897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.170121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.170131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.170333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.170343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.170574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.170584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.170873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.170904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.171259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.171291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.171579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.171608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.171958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.171989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.172343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.172375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.959 qpair failed and we were unable to recover it. 00:31:21.959 [2024-06-07 21:48:22.172645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.959 [2024-06-07 21:48:22.172655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:21.960 qpair failed and we were unable to recover it. 
00:31:22.271 [2024-06-07 21:48:22.172976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.172986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.173211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.173222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.173369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.173379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.173610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.173621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.173836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.173845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.174129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.174161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.174567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.174598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.174911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.174921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.175142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.175152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.175427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.175437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 
00:31:22.271 [2024-06-07 21:48:22.175745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.175754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.175974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.271 [2024-06-07 21:48:22.175984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.271 qpair failed and we were unable to recover it. 00:31:22.271 [2024-06-07 21:48:22.176282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.176294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.176573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.176583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.176855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.176865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.177082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.177092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.177370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.177380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.177619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.177629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.177916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.177926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.178238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.178248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 
00:31:22.272 [2024-06-07 21:48:22.178402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.178412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.178683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.178693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.178897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.178906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.179118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.179128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.179351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.179361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.179571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.179580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.179797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.179807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.180095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.180105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.180400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.180409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.180627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.180636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 
00:31:22.272 [2024-06-07 21:48:22.180923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.180932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.181220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.181230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.181448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.181458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.181667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.181677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.181974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.181984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.182284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.182295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.182513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.182523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.182768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.182778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.183079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.183089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.183375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.183385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 
00:31:22.272 [2024-06-07 21:48:22.183694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.183704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.184021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.184034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.184379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.184389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.184634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.184643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.184874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.184883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.185223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.185233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.185506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.185516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.185727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.185737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.186016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.186030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 00:31:22.272 [2024-06-07 21:48:22.186239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.272 [2024-06-07 21:48:22.186248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.272 qpair failed and we were unable to recover it. 
00:31:22.273 [2024-06-07 21:48:22.186555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.186564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.186804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.186813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.187139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.187150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.187465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.187474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.187674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.187684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.187994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.188003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.188288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.188298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.188604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.188613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.188937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.188946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.189164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.189174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 
00:31:22.273 [2024-06-07 21:48:22.189395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.189404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.189604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.189614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.189832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.189841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.190095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.190105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.190394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.190404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.190721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.190731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.191036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.191046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.191286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.191296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.191513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.191523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.191733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.191742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 
00:31:22.273 [2024-06-07 21:48:22.192066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.192076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.192347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.192357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.192617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.192626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.192848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.192857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.193152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.193162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.193378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.193388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.193650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.193660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.193894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.193903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.194177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.194186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.194490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.194499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 
00:31:22.273 [2024-06-07 21:48:22.194795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.194805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.195018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.195033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.195378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.195387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.195603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.195612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.195764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.195774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.196020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.196035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.196252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.196262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.273 [2024-06-07 21:48:22.196560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.273 [2024-06-07 21:48:22.196570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.273 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.196849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.196858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.197162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.197172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 
00:31:22.274 [2024-06-07 21:48:22.197390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.197399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.197668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.197678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.197950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.197961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.198259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.198269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.198479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.198489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.198775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.198784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.199092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.199101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.199423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.199453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.199722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.199752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.200111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.200142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 
00:31:22.274 [2024-06-07 21:48:22.200386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.200416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.200746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.200777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.201092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.201123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.201364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.201374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.201670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.201679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.201965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.201974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.202254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.202263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.202583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.202592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.202940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.202949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.203243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.203253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 
00:31:22.274 [2024-06-07 21:48:22.203572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.203581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.203818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.203828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.204099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.204109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.204342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.204351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.204665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.204674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.204874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.204884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.205100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.205132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.205306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.205337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.205591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.205622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.205781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.205791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 
00:31:22.274 [2024-06-07 21:48:22.206094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.206125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.206377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.206387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.206631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.206640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.206788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.206798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.207152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.207183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.274 qpair failed and we were unable to recover it. 00:31:22.274 [2024-06-07 21:48:22.207456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.274 [2024-06-07 21:48:22.207465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.207733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.207742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.208040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.208050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.208319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.208329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.208606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.208615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 
00:31:22.275 [2024-06-07 21:48:22.208910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.208919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.209139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.209149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.209360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.209371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.209666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.209675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.209969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.209978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.210288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.210319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.210562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.210593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.210922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.210952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.211201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.211233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.211490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.211520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 
00:31:22.275 [2024-06-07 21:48:22.211845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.211854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.212097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.212106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.212388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.212397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.212717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.212727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.213023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.213065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.213417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.213447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.213659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.213669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.213879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.213909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.214104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.214136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.214483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.214513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 
00:31:22.275 [2024-06-07 21:48:22.214766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.214797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.215141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.215173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.215372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.215413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.215615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.215624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.215836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.215845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.216126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.216136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.216435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.216445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.216590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.216599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.216805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.216815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.216959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.216967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 
00:31:22.275 [2024-06-07 21:48:22.217188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.217219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.217556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.217587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.217767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.217797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.275 [2024-06-07 21:48:22.218049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.275 [2024-06-07 21:48:22.218082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.275 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.218450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.218481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.218720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.218750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.219040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.219071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.219283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.219314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.219625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.219655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.220004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.220013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 
00:31:22.276 [2024-06-07 21:48:22.220350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.220381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.220707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.220716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.221000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.221011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.221211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.221221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.221379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.221389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.221540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.221549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.221824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.221833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.222036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.222045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.222258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.222267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.222407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.222416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 
00:31:22.276 [2024-06-07 21:48:22.222579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.222588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.222796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.222826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.223087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.223119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.223462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.223492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.223747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.223778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.224130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.224161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.224507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.224537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.224842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.224851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.225096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.225105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 00:31:22.276 [2024-06-07 21:48:22.225334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.276 [2024-06-07 21:48:22.225343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.276 qpair failed and we were unable to recover it. 
00:31:22.282 [2024-06-07 21:48:22.276977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.276986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.277217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.277226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.277491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.277501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.277648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.277657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.277930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.277960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.278220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.278251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.278613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.278644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.278924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.278955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.279273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.279305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.279641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.279650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 
00:31:22.282 [2024-06-07 21:48:22.279977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.280008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.280350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.280382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.280628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.280636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.280900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.280909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.281180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.281190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.281473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.281482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.281717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.281726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.282001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.282010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.282173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.282183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.282391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.282403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 
00:31:22.282 [2024-06-07 21:48:22.282615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.282624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.282918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.282927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.283215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.283225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.283467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.283477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.283754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.283763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.283972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.283981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.284289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.284299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.284536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.284544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.284779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.284788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.285034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.285044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 
00:31:22.282 [2024-06-07 21:48:22.285241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.285250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.285519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.285528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.285861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.285891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.286146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.286177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.286380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.286411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.286740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.286771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.282 [2024-06-07 21:48:22.287037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.282 [2024-06-07 21:48:22.287047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.282 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.287202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.287211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.287456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.287466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.287611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.287621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 
00:31:22.283 [2024-06-07 21:48:22.287935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.287965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.288252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.288283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.288526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.288536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.288755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.288764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.288978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.289008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.289359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.289391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.289707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.289738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.290060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.290091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.290355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.290385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.290754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.290784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 
00:31:22.283 [2024-06-07 21:48:22.291019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.291031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.291331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.291369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.291615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.291646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.291989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.292019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.292312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.292343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.292545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.292554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.292821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.292830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.293048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.293074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.293317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.293327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.293652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.293663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 
00:31:22.283 [2024-06-07 21:48:22.293932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.293962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.294341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.294372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.294632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.294662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.295001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.295055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.295320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.295349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.295706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.295736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.296037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.296069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.296411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.296441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.296755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.296785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.297103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.297136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 
00:31:22.283 [2024-06-07 21:48:22.297456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.297487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.297802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.297832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.298158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.298189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.298484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.298516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.283 [2024-06-07 21:48:22.298823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.283 [2024-06-07 21:48:22.298855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.283 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.299122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.299160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.299437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.299446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.299662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.299672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.299938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.299947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.300165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.300175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 
00:31:22.284 [2024-06-07 21:48:22.300385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.300394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.300695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.300725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.300975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.301006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.301347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.301357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.301666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.301675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.302003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.302044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.302302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.302333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.302543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.302573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.302907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.302937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.303253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.303262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 
00:31:22.284 [2024-06-07 21:48:22.303481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.303490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.303841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.303850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.304159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.304169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.304389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.304398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.304657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.304667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.304880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.304889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.305212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.305222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.305496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.305505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.305752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.305761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.305979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.305990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 
00:31:22.284 [2024-06-07 21:48:22.306270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.306280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.306490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.306499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.306713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.306722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.306920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.306928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.307092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.307102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.307377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.307408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.307667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.307698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.308066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.308097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.308350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.308381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.308725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.308756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 
00:31:22.284 [2024-06-07 21:48:22.309066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.309098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.309387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.309417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.284 [2024-06-07 21:48:22.309606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.284 [2024-06-07 21:48:22.309615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.284 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.309876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.309885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.310124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.310133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.310350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.310359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.310578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.310587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.310837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.310846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.311113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.311124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.311362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.311371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 
00:31:22.285 [2024-06-07 21:48:22.311655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.311664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.311962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.311971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.312239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.312249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.312462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.312471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.312784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.312793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.313007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.313016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.313179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.313190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.313408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.313417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.313664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.313673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.313840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.313849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 
00:31:22.285 [2024-06-07 21:48:22.314102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.314133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.314391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.314421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.314676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.314706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.314904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.314934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.315242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.315252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.315519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.315528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.315795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.315804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.316076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.316085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.316378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.316387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 00:31:22.285 [2024-06-07 21:48:22.316684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.285 [2024-06-07 21:48:22.316695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.285 qpair failed and we were unable to recover it. 
00:31:22.285 [2024-06-07 21:48:22.317019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.285 [2024-06-07 21:48:22.317058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.285 qpair failed and we were unable to recover it.
00:31:22.285 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats continuously from 21:48:22.317 through 21:48:22.376, roughly 200 times, always for tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 ...]
00:31:22.291 [2024-06-07 21:48:22.376351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.376382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.376693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.376723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.376968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.376977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.377188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.377197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.377409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.377419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.377620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.377629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.377827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.377858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.378110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.378141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.378412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.378443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.378753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.378783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 
00:31:22.291 [2024-06-07 21:48:22.379095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.379126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.379314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.379345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.379626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.379657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.379839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.379870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.380044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.380054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.380380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.380411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.380752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.380787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.381121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.291 [2024-06-07 21:48:22.381152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.291 qpair failed and we were unable to recover it. 00:31:22.291 [2024-06-07 21:48:22.381434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.381464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.381664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.381694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 
00:31:22.292 [2024-06-07 21:48:22.381961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.381991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.382245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.382277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.382560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.382591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.382776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.382785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.382998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.383037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.383324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.383354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.383676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.383706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.383990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.384020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.384312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.384343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.384677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.384707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 
00:31:22.292 [2024-06-07 21:48:22.384994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.385035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.385349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.385380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.385741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.385771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.386115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.386147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.386457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.386487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.386799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.386829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.387174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.387206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.387544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.387574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.387798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.387828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.388162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.388194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 
00:31:22.292 [2024-06-07 21:48:22.388467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.388476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.388679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.388688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.388830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.388839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.389125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.389157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.389352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.389382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.389635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.389665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.389921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.389952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.390262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.390272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.390471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.390480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.390676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.390685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 
00:31:22.292 [2024-06-07 21:48:22.390934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.390965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.391169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.391200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.391568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.391599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.391914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.391944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.392204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.392235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.392571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.392601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.292 qpair failed and we were unable to recover it. 00:31:22.292 [2024-06-07 21:48:22.392911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.292 [2024-06-07 21:48:22.392922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.393135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.393145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.393461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.393470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.393629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.393638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 
00:31:22.293 [2024-06-07 21:48:22.393792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.393802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.394016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.394029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.394186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.394195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.394448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.394478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.394735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.394765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.395045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.395088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.395184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.395192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.395391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.395400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.395669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.395678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.395996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.396006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 
00:31:22.293 [2024-06-07 21:48:22.396234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.396243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.396484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.396494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.396712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.396722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.396931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.396940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.397210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.397220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.397379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.397389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.397616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.397625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.397837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.397846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.398070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.398080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.398348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.398358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 
00:31:22.293 [2024-06-07 21:48:22.398591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.398600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.398840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.398850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.399002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.399012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.399224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.399255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.399493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.399524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.399780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.399811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.400001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.400038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.400328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.400359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.400666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.400707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.400915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.400925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 
00:31:22.293 [2024-06-07 21:48:22.401217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.401226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.401443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.401474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.401813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.401822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.293 qpair failed and we were unable to recover it. 00:31:22.293 [2024-06-07 21:48:22.402085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.293 [2024-06-07 21:48:22.402095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.402301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.402310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.402524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.402533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.402746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.402756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.402954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.402963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.403198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.403208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.403493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.403502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 
00:31:22.294 [2024-06-07 21:48:22.403595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.403603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.403868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.403877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.404076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.404087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.404301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.404310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.404603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.404612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.404877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.404887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.405152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.405161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.405426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.405435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.405704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.405713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.406006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.406015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 
00:31:22.294 [2024-06-07 21:48:22.406234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.406243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.406469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.406478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.406718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.406727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.406996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.407005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.407270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.407280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.407548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.407557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.407757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.407766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.408036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.408045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.408270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.408279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.408486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.408495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 
00:31:22.294 [2024-06-07 21:48:22.408718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.408728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.408880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.408900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.409133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.409165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.409372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.409403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.409578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.294 [2024-06-07 21:48:22.409609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.294 qpair failed and we were unable to recover it. 00:31:22.294 [2024-06-07 21:48:22.409918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.295 [2024-06-07 21:48:22.409948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.295 qpair failed and we were unable to recover it. 00:31:22.295 [2024-06-07 21:48:22.410176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.295 [2024-06-07 21:48:22.410186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.295 qpair failed and we were unable to recover it. 00:31:22.295 [2024-06-07 21:48:22.410437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.295 [2024-06-07 21:48:22.410446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.295 qpair failed and we were unable to recover it. 00:31:22.295 [2024-06-07 21:48:22.410593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.295 [2024-06-07 21:48:22.410602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.295 qpair failed and we were unable to recover it. 00:31:22.295 [2024-06-07 21:48:22.410877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.295 [2024-06-07 21:48:22.410908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.295 qpair failed and we were unable to recover it. 
00:31:22.295 [2024-06-07 21:48:22.411160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.295 [2024-06-07 21:48:22.411191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.295 qpair failed and we were unable to recover it.
[... this three-line failure sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats 210 times in total between 21:48:22.411 and 21:48:22.465; first and last occurrences shown, only the per-attempt timestamps differ ...]
00:31:22.300 [2024-06-07 21:48:22.465795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.300 [2024-06-07 21:48:22.465804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.300 qpair failed and we were unable to recover it.
00:31:22.300 [2024-06-07 21:48:22.466041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.300 [2024-06-07 21:48:22.466051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.300 qpair failed and we were unable to recover it. 00:31:22.300 [2024-06-07 21:48:22.466210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.300 [2024-06-07 21:48:22.466220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.300 qpair failed and we were unable to recover it. 00:31:22.300 [2024-06-07 21:48:22.466373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.300 [2024-06-07 21:48:22.466382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.300 qpair failed and we were unable to recover it. 00:31:22.300 [2024-06-07 21:48:22.466534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.300 [2024-06-07 21:48:22.466543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.300 qpair failed and we were unable to recover it. 00:31:22.300 [2024-06-07 21:48:22.466836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.300 [2024-06-07 21:48:22.466845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.300 qpair failed and we were unable to recover it. 00:31:22.300 [2024-06-07 21:48:22.467084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.300 [2024-06-07 21:48:22.467093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.300 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.467294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.467303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.467512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.467521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.467727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.467737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.467879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.467889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 
00:31:22.301 [2024-06-07 21:48:22.468186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.468195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.468405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.468414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.468574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.468583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.468821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.468851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.469029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.469039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.469360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.469369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.469610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.469619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.469842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.469851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.470006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.470015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-06-07 21:48:22.470243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.301 [2024-06-07 21:48:22.470274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.301 qpair failed and we were unable to recover it. 
00:31:22.301-00:31:22.302 [same connect() failed, errno = 111 / sock connection error sequence repeats ~30 times between 21:48:22.470 and 21:48:22.478; most attempts hit tqpair=0x7fb7bc000b90, several hit tqpair=0x7fb7c4000b90, and one hits tqpair=0x13b4d60, all with addr=10.0.0.2, port=4420; every qpair failed and we were unable to recover it]
00:31:22.302-00:31:22.305 [same sequence repeats ~120 more times between 21:48:22.478 and 21:48:22.508, each attempt failing with connect() errno = 111 for tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420; every qpair failed and we were unable to recover it]
00:31:22.305 [2024-06-07 21:48:22.508530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.508560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.508806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.508836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.509072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.509104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.509471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.509501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.509746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.509776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.510091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.510122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.510324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.510363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.510523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.510532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.510781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.510790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.510954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.510990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 
00:31:22.305 [2024-06-07 21:48:22.511351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.511382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.511732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.511763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.512099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.512130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.512471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.512507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.512757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.512788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.513063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.513095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.513415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.513446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.513725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.513756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.514063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.514072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.305 [2024-06-07 21:48:22.514379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.514389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 
00:31:22.305 [2024-06-07 21:48:22.514626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.305 [2024-06-07 21:48:22.514635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.305 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.514771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.514780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.514989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.514998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.515141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.515151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.515361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.515370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.515509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.515518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.515806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.515815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.516136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.516145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.516299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.516308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.516576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.516586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 
00:31:22.306 [2024-06-07 21:48:22.516735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.516745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.516929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.516938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.517204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.517214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.517503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.517513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.517810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.517820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.517975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.517984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.518190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.518200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.518353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.518362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.306 [2024-06-07 21:48:22.518582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.306 [2024-06-07 21:48:22.518592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.306 qpair failed and we were unable to recover it. 00:31:22.586 [2024-06-07 21:48:22.518859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.586 [2024-06-07 21:48:22.518868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 
00:31:22.587 [2024-06-07 21:48:22.519153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.519163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.519403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.519413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.519614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.519623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.519886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.519895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.520107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.520116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.520317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.520326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.520470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.520480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.520678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.520687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.520890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.520899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.521055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.521065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 
00:31:22.587 [2024-06-07 21:48:22.521278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.521287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.521487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.521497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.521700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.521710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.521804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.521815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.522057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.522066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.522312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.522322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.522449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.522458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.522694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.522704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.522919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.522928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.523135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.523145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 
00:31:22.587 [2024-06-07 21:48:22.523357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.523366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.523564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.523573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.523771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.523780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.524011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.524020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.524161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.524171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.524399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.524408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.524619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.524628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.524897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.524906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.525162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.525172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.525385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.525394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 
00:31:22.587 [2024-06-07 21:48:22.525618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.525627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.525849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.525858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.526082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.526092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.526392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.526422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.526632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.587 [2024-06-07 21:48:22.526662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.587 qpair failed and we were unable to recover it. 00:31:22.587 [2024-06-07 21:48:22.526913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.526944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.527211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.527220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.527436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.527445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.527581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.527592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.527789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.527799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 
00:31:22.588 [2024-06-07 21:48:22.528066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.528076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.528283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.528292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.528522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.528531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.528832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.528842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.529039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.529048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.529314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.529324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.529467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.529477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.529732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.529763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.530101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.530133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.530380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.530410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 
00:31:22.588 [2024-06-07 21:48:22.530690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.530720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.530975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.531005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.531287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.531317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.531579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.531614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.531809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.531839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.532110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.532159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.532359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.532390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.532721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.532730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.533106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.533137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.533446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.533476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 
00:31:22.588 [2024-06-07 21:48:22.533719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.533749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.534059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.534091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.534291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.534321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.534551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.534560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.534783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.534792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.534999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.535008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.535324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.535333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.535622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.535631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.535871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.535880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.536011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.536020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 
00:31:22.588 [2024-06-07 21:48:22.536218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.536228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.588 [2024-06-07 21:48:22.536455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.588 [2024-06-07 21:48:22.536464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.588 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.536707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.536716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.536960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.536970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.537167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.537176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.537398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.537429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.537627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.537657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.537896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.537927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.538117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.538148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.538397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.538427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 
00:31:22.589 [2024-06-07 21:48:22.538768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.538798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.539077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.539108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.539374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.539383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.539580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.539589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.539815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.539845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.540132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.540163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.540496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.540526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.540813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.540844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.541125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.541157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.541392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.541401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 
00:31:22.589 [2024-06-07 21:48:22.541542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.541551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.541775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.541805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.542112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.542143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.542399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.542435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.542724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.542754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.543092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.543123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.543432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.543462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.543712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.543742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.543990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.544020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 00:31:22.589 [2024-06-07 21:48:22.544282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.589 [2024-06-07 21:48:22.544291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.589 qpair failed and we were unable to recover it. 
00:31:22.595 [2024-06-07 21:48:22.598056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.595 [2024-06-07 21:48:22.598088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.595 qpair failed and we were unable to recover it. 00:31:22.595 [2024-06-07 21:48:22.598400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.595 [2024-06-07 21:48:22.598430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.595 qpair failed and we were unable to recover it. 00:31:22.595 [2024-06-07 21:48:22.598670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.595 [2024-06-07 21:48:22.598701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.595 qpair failed and we were unable to recover it. 00:31:22.595 [2024-06-07 21:48:22.599018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.595 [2024-06-07 21:48:22.599057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.595 qpair failed and we were unable to recover it. 00:31:22.595 [2024-06-07 21:48:22.599242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.595 [2024-06-07 21:48:22.599272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.595 qpair failed and we were unable to recover it. 00:31:22.595 [2024-06-07 21:48:22.599516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.599546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.599721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.599730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.599942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.599973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.600224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.600255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.600563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.600593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 
00:31:22.596 [2024-06-07 21:48:22.600847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.600878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.601125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.601156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.601330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.601361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.601610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.601641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.601975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.602005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.602269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.602300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.602540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.602571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.602919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.602928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.603222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.603254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.603580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.603610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 
00:31:22.596 [2024-06-07 21:48:22.603817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.603848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.604110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.604141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.604366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.604376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.604573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.604583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.604827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.604857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.605196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.605227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.605485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.605515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.605850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.605881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.606138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.606170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.606480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.606511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 
00:31:22.596 [2024-06-07 21:48:22.606832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.606863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.607103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.607135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.607414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.607423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.607592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.607612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.607766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.607776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.607991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.608000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.596 qpair failed and we were unable to recover it. 00:31:22.596 [2024-06-07 21:48:22.608229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.596 [2024-06-07 21:48:22.608238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.608530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.608539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.608699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.608708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.608828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.608837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 
00:31:22.597 [2024-06-07 21:48:22.609041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.609050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.609290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.609299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.609565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.609574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.609850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.609859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.610155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.610164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.610393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.610403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.610695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.610704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.610916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.610926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.611092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.611102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.611372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.611381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 
00:31:22.597 [2024-06-07 21:48:22.611524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.611533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.611692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.611701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.611848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.611857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.612156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.612187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.612357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.612367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.612636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.612667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.612923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.612953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.613265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.613297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.613503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.613533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.613820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.613829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 
00:31:22.597 [2024-06-07 21:48:22.613972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.613981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.614196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.614227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.614487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.614517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.614774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.614804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.615084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.597 [2024-06-07 21:48:22.615116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.597 qpair failed and we were unable to recover it. 00:31:22.597 [2024-06-07 21:48:22.615450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.615481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.615802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.615832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.616170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.616201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.616481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.616490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.616766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.616796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 
00:31:22.598 [2024-06-07 21:48:22.617046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.617078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.617364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.617395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.617638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.617676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.617873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.617884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.618158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.618168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.618404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.618434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.618695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.618726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.618971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.619001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.619266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.619297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.619646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.619678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 
00:31:22.598 [2024-06-07 21:48:22.620000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.620041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.620334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.620364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.620655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.620686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.620997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.621053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.621311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.621341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.621651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.621681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.621930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.621939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.622233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.622243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.622461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.622470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.622681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.622690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 
00:31:22.598 [2024-06-07 21:48:22.622898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.622907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.623209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.623240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.623536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.623566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.623822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.623853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.624112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.624143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.624434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.624464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.624787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.624818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.625074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.625106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.625423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.625454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 00:31:22.598 [2024-06-07 21:48:22.625661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.598 [2024-06-07 21:48:22.625691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.598 qpair failed and we were unable to recover it. 
00:31:22.599 [2024-06-07 21:48:22.625876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.625907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.626242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.626283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.626499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.626508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.626771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.626780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.627053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.627084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.627397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.627428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.627633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.627663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.627938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.627969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.628306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.628337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.628541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.628571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 
00:31:22.599 [2024-06-07 21:48:22.628778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.628809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.629066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.629097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.629316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.629346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.629685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.629722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.629923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.629953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.630266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.630298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.630557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.630588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.630867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.630876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.631218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.631249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.631533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.631563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 
00:31:22.599 [2024-06-07 21:48:22.631855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.631886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.632087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.632118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.632457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.632487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.632758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.632768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.633053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.633085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.633435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.633466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.633787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.633818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.634110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.634142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.634382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.634412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.634652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.634683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 
00:31:22.599 [2024-06-07 21:48:22.635034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.635066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.635408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.635438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.635705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.635736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.636048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.636079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.636386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.636418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.636668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.636698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.637019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.599 [2024-06-07 21:48:22.637059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.599 qpair failed and we were unable to recover it. 00:31:22.599 [2024-06-07 21:48:22.637317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.637347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.637631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.637663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.637985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.638016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 
00:31:22.600 [2024-06-07 21:48:22.638348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.638379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.638663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.638694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.639040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.639071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.639392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.639423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.639778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.639809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.640014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.640055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.640368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.640399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.640732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.640762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.641076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.641108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.641303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.641334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 
00:31:22.600 [2024-06-07 21:48:22.641642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.641673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.642001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.642041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.642285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.642316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.642607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.642638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.642845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.642876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.643189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.643220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.643368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.643377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.643604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.643634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.643967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.643998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.644265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.644296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 
00:31:22.600 [2024-06-07 21:48:22.644605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.644635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.644894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.644925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.645235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.645267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.645582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.645613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.645761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.645770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.646071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.646102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.646309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.646339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.646655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.646686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.647035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.647067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.647264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.647294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 
00:31:22.600 [2024-06-07 21:48:22.647627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.647637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.647863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.647872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.648206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.600 [2024-06-07 21:48:22.648216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.600 qpair failed and we were unable to recover it. 00:31:22.600 [2024-06-07 21:48:22.648421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.648452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.648790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.648821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.649140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.649172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.649433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.649463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.649742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.649773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.649978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.650008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.650251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.650282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 
00:31:22.601 [2024-06-07 21:48:22.650535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.650570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.650903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.650933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.651242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.651274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.651625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.651656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.652017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.652056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.652350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.652381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.652715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.652745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.653066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.653098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.653434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.653464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.653704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.653735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 
00:31:22.601 [2024-06-07 21:48:22.653997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.654036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.654370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.654400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.654605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.654635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.654849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.654858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.655018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.655058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.655395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.655425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.655686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.655717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.655976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.656006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.656271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.656302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.656549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.656558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 
00:31:22.601 [2024-06-07 21:48:22.656844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.656853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.657091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.657124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.657322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.657353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.657554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.657584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.657916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.601 [2024-06-07 21:48:22.657947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.601 qpair failed and we were unable to recover it. 00:31:22.601 [2024-06-07 21:48:22.658191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.658223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.658476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.658486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.658766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.658775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.658988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.659018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.659285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.659316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 
00:31:22.602 [2024-06-07 21:48:22.659555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.659564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.659861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.659891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.660099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.660129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.660387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.660419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.660680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.660710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.660921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.660930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.661083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.661093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.661293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.661302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.661455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.661464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.661767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.661798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 
00:31:22.602 [2024-06-07 21:48:22.662119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.662156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.662434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.662465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.662769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.662778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.663128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.663159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.663417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.663448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.663756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.663787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.664101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.664110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.664342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.664352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.664564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.664573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.664708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.664717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 
00:31:22.602 [2024-06-07 21:48:22.664992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.665001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.665200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.665210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.665514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.665544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.665917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.665948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.666299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.602 [2024-06-07 21:48:22.666330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.602 qpair failed and we were unable to recover it. 00:31:22.602 [2024-06-07 21:48:22.666615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.666624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.666855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.666864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.667079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.667089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.667372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.667402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.667646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.667676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 
00:31:22.603 [2024-06-07 21:48:22.667826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.667857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.668115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.668146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.668398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.668429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.668678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.668687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.668972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.669002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.669357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.669388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.669696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.669726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.669908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.669939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.670290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.670322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.670564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.670599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 
00:31:22.603 [2024-06-07 21:48:22.670812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.670821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.671120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.671151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.671484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.671515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.671851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.671882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.672094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.672125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.672439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.672469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.672799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.672830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.673073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.673105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.673355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.673398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.673686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.673716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 
00:31:22.603 [2024-06-07 21:48:22.674049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.674086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.674346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.674376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.674689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.674719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.674949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.674958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.675197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.675208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.675416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.675425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.675661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.675671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.675888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.675918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.676122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.676153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.676424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.676456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 
00:31:22.603 [2024-06-07 21:48:22.676772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.676803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.677119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.677151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.677423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.603 [2024-06-07 21:48:22.677454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.603 qpair failed and we were unable to recover it. 00:31:22.603 [2024-06-07 21:48:22.677720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.677750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.678114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.678163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.678502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.678533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.678792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.678823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.679108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.679140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.679480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.679511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.679794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.679825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 
00:31:22.604 [2024-06-07 21:48:22.680094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.680126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.680337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.680375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.680604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.680613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.680846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.680855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.681091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.681101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1638456 Killed "${NVMF_APP[@]}" "$@" 00:31:22.604 [2024-06-07 21:48:22.681256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.681268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.681485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.681521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.681803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.681835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:31:22.604 [2024-06-07 21:48:22.682080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.682113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 
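The bash job-control notice interleaved above ("line 36: 1638456 Killed ...") is the whole point of this test: target_disconnect.sh kills the running nvmf_tgt application (pid 1638456) with SIGKILL, so every subsequent host-side connect() is refused and the wall of errno = 111 records accumulates, after which disconnect_init 10.0.0.2 starts bringing a fresh target up. A minimal sketch of that kill step, with variable names assumed for illustration rather than taken from the script:

    # Sketch only; variable names are assumptions, not the script's own.
    tgt_pid=1638456                 # pid from the "Killed" notice above
    sudo kill -9 "$tgt_pid"         # SIGKILL, hence bash reports "Killed"
    # from this moment, initiator connects fail with ECONNREFUSED (errno 111)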
00:31:22.604 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:22.604 [2024-06-07 21:48:22.682480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.682513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:22.604 [2024-06-07 21:48:22.682781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.682794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:22.604 [2024-06-07 21:48:22.683001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.683012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:22.604 [2024-06-07 21:48:22.683226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.683237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.683460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.683490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.683747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.683779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.684045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.684077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.684412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.684443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 
00:31:22.604 [2024-06-07 21:48:22.684710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.684741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.684951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.684982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.685305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.685336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.685569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.685578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.685813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.685822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.685951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.685961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.686265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.686296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.686622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.686653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.686840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.686872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.687074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.687106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 
00:31:22.604 [2024-06-07 21:48:22.687397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.687427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.604 qpair failed and we were unable to recover it. 00:31:22.604 [2024-06-07 21:48:22.687765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.604 [2024-06-07 21:48:22.687796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.605 qpair failed and we were unable to recover it. 00:31:22.605 [2024-06-07 21:48:22.688131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.605 [2024-06-07 21:48:22.688162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.605 qpair failed and we were unable to recover it. 00:31:22.605 [2024-06-07 21:48:22.688367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.605 [2024-06-07 21:48:22.688402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.605 qpair failed and we were unable to recover it. 00:31:22.605 [2024-06-07 21:48:22.688685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.605 [2024-06-07 21:48:22.688716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.605 qpair failed and we were unable to recover it. 00:31:22.605 [2024-06-07 21:48:22.688979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.605 [2024-06-07 21:48:22.689011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.605 qpair failed and we were unable to recover it. 00:31:22.605 [2024-06-07 21:48:22.689303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.605 [2024-06-07 21:48:22.689334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.605 qpair failed and we were unable to recover it. 00:31:22.605 [2024-06-07 21:48:22.689637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.605 [2024-06-07 21:48:22.689647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.605 qpair failed and we were unable to recover it. 00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1639278 00:31:22.605 [2024-06-07 21:48:22.689766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.605 [2024-06-07 21:48:22.689777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.605 qpair failed and we were unable to recover it. 
00:31:22.605 [... the connect()/qpair failure triplet keeps repeating between each traced shell line below (21:48:22.689962 through 21:48:22.691333); repeats elided ...] 00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1639278 00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1639278 ']' 00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
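The traced lines above are nvmf/common.sh restarting the target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace, its PID is recorded as nvmfpid, and waitforlisten then polls the daemon's RPC socket (rpc_addr=/var/tmp/spdk.sock, up to max_retries=100) until it answers. A minimal sketch of that polling loop, assuming the stock scripts/rpc.py client and its rpc_get_methods call; the helper name and the sleep interval are illustrative, not the verbatim helper:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target process died
            # the daemon is "listening" once it answers any RPC on the socket
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1                                        # timed out waiting
    }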
00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:22.605 21:48:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:22.605 [... nine more connect()/qpair failures against tqpair=0x7fb7bc000b90, 21:48:22.691480 through 21:48:22.693130; repeats elided ...]
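set +x mutes the shell trace while the wait loop spins; the initiator's connect() retries keep failing because a freshly started nvmf_tgt exposes no TCP listener until it is configured over RPC. A hedged outline of the usual SPDK bring-up sequence once the daemon answers; the subsystem NQN, bdev name, and sizes are illustrative and may differ from what this particular test issues:

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420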
00:31:22.605-00:31:22.609 [... the identical connect() failed, errno = 111 / sock connection error / qpair failed triplet repeats about 120 more times against tqpair=0x7fb7bc000b90, 21:48:22.693333 through 21:48:22.718017, while the wait loop spins; repeats elided ...]
00:31:22.609 [2024-06-07 21:48:22.718259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.609 [2024-06-07 21:48:22.718300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.609 qpair failed and we were unable to recover it. 00:31:22.609 [... three further failures against tqpair=0x7fb7c4000b90 (21:48:22.718471 through 21:48:22.719106), after which the retries resume against tqpair=0x7fb7bc000b90 (21:48:22.719398 through 21:48:22.720503); repeats elided ...]
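The tqpair values are heap addresses of the initiator's queue-pair objects, so the switch from 0x7fb7bc000b90 to 0x7fb7c4000b90 and back indicates two distinct qpairs retrying in parallel rather than one qpair changing identity. When triaging a saved copy of this console output, a one-liner like the following tallies attempts per qpair; the log filename is illustrative:

    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn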
00:31:22.609-00:31:22.610 [... roughly forty further connect()/qpair failures against tqpair=0x7fb7bc000b90, 21:48:22.720717 through 21:48:22.729168; repeats elided ...]
00:31:22.610 [2024-06-07 21:48:22.729433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.729443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.729709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.729718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.729861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.729871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.730154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.730163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.730309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.730318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.730537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.730546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.730788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.730798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.730951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.730960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.731253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.731263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.731503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.731513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 
00:31:22.610 [2024-06-07 21:48:22.731743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.610 [2024-06-07 21:48:22.731752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.610 qpair failed and we were unable to recover it. 00:31:22.610 [2024-06-07 21:48:22.731988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.731997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.732195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.732204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.732407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.732417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.732627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.732636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.732953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.732962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.733202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.733211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.733441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.733451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.733597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.733606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.733755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.733764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 
00:31:22.611 [2024-06-07 21:48:22.733972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.733981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.734194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.734203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.734412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.734422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.734620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.734630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.734852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.734861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.735183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.735192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.735413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.735423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.735572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.735582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.735781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.735790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.736082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.736092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 
00:31:22.611 [2024-06-07 21:48:22.736378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.736387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.736654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.736665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.736863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.736874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.737017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.737031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.737240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.737250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.737534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.737543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.737697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.737706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.737851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.737860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.738151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.611 [2024-06-07 21:48:22.738161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.611 qpair failed and we were unable to recover it. 00:31:22.611 [2024-06-07 21:48:22.738314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.738324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 
00:31:22.612 [2024-06-07 21:48:22.738527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.738537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.738800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.738810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.739122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.739132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.739328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.739338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.739627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.739637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.739959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.739969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.740111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.740129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.740275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.740284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.740436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.740446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.740663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.740672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 
00:31:22.612 [2024-06-07 21:48:22.740944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.740953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.741260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.741270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.741414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.741423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.741640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.741649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.741793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.741802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.742096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.742105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.742266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.742276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.742492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.742501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.742645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.742654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-07 21:48:22.742865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-07 21:48:22.742875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 
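For context on the repeated failure above: on Linux, errno = 111 is ECONNREFUSED, the error connect() returns when the destination host is reachable but nothing is listening on the target port. A minimal standalone sketch (plain POSIX sockets, not SPDK's posix_sock_create(); the address and port are copied from the log) that reproduces the same message:

/* sketch.c - reproduce "connect() failed, errno = 111" against a
 * reachable host with no listener on the NVMe/TCP port. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* With no target listening, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}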
00:31:22.612 [2024-06-07 21:48:22.743108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.612 [2024-06-07 21:48:22.743118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.612 qpair failed and we were unable to recover it.
00:31:22.612 [2024-06-07 21:48:22.743229] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization...
00:31:22.612 [2024-06-07 21:48:22.743282] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the connect()/qpair error triple resumes unchanged, timestamps 2024-06-07 21:48:22.743354 through 21:48:22.744906 ...]
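The "Starting SPDK ... initialization" and DPDK EAL parameters lines above are the one genuinely new piece of information in this stretch of output: while the qpair retries are still failing, an SPDK nvmf application instance is initializing DPDK with core mask 0xF0. As an aside, an illustrative decoding of such a hex core mask (not DPDK's actual EAL argument parser) shows 0xF0 selects cores 4 through 7:

/* maskdecode.c - decode a DPDK-style hex core mask such as "-c 0xF0". */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long long mask = strtoull("0xF0", NULL, 16);
    printf("core mask 0x%llX selects cores:", mask);
    for (int core = 0; core < 64; core++)
        if (mask & (1ULL << core))
            printf(" %d", core);
    putchar('\n'); /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}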
[... the same error triple continues, timestamps 2024-06-07 21:48:22.745190 through 21:48:22.768695, still errno = 111 against tqpair=0x7fb7bc000b90, addr=10.0.0.2, port=4420, roughly 100 further occurrences ...]
00:31:22.615 [2024-06-07 21:48:22.768904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.768914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.769122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.769132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.769334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.769343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.769505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.769514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.769834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.769844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.770061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.770071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.770363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.770372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.770656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.770666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.770887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.770897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.771103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.771112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 
00:31:22.615 [2024-06-07 21:48:22.771322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.771332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.771557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.771567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.771766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.771776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.772079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-07 21:48:22.772089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-07 21:48:22.772295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.772305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.772517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.772527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.772803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.772812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.773077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.773086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.773240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.773250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.773539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.773548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 
00:31:22.616 [2024-06-07 21:48:22.773832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.773841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.774050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.774062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.774329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.774339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.774619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.774628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.774841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.774850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.775115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.775125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.775341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.775351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.775546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.775556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.775698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.775707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.775974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.775983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 
00:31:22.616 [2024-06-07 21:48:22.776279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.776288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.776497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.776507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.776635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.776645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.776910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.776919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.777116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.777126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.777338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.777348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.777616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.777625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.777835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.777844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.777996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.778006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.778218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.778228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 
00:31:22.616 [2024-06-07 21:48:22.778496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.778505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.778774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.778783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.779017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-07 21:48:22.779031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-07 21:48:22.779309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.779318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.779609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.779619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.779901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.779911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.780206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.780215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.780430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.780440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.780711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.780720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.780933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.780942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 
00:31:22.617 [2024-06-07 21:48:22.781102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.781112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.781341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.781350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.781569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.781580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.781792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.781801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.781950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.781960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.782222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.782231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.782512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.782521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.782731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.782740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.782929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.782938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.783214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.783224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 
00:31:22.617 [2024-06-07 21:48:22.783384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.783393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.783602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.783613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.783825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.783835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.784156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.784165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.784462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.784471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.784615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.784624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.784911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.784921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.785134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.785144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.785296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.785305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.785530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.785539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 
00:31:22.617 [2024-06-07 21:48:22.785696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.785705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.785922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.785931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.617 [2024-06-07 21:48:22.786198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.786208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.786403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.786413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.786698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.786708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.787021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.787036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.787321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-07 21:48:22.787331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-07 21:48:22.787553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-07 21:48:22.787562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-07 21:48:22.787776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-07 21:48:22.787786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-07 21:48:22.787996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-07 21:48:22.788005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 
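The EAL line above is DPDK (which SPDK runs on) reporting that NUMA node 1 has no free 2 MB hugepages at probe time; SPDK's scripts/setup.sh normally reserves them before a test run. As a rough illustration only, and not something taken from this log, hugepages can be reserved per node through the standard Linux sysfs interface; the node number and page count below are assumptions for the sketch:

/* Illustrative sketch (requires root): reserve 2048 kB hugepages on
 * NUMA node 1 via the standard Linux sysfs interface. This is roughly
 * what SPDK's setup tooling does on your behalf; node1 and the count
 * of 1024 pages are assumed values for illustration. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages";

    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror("fopen"); /* e.g. EACCES without root, ENOENT if node 1 is absent */
        return 1;
    }

    /* Ask the kernel to set aside 1024 x 2 MB = 2 GB of hugepages on node 1. */
    fprintf(f, "%d\n", 1024);
    fclose(f);
    return 0;
}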
(the same connect()-failed / qpair-failed message continues repeating with advancing timestamps, 21:48:22.786 through 21:48:22.807; duplicate entries omitted)
00:31:22.620 [2024-06-07 21:48:22.807500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.620 [2024-06-07 21:48:22.807509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.620 qpair failed and we were unable to recover it.
00:31:22.620 [2024-06-07 21:48:22.807711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.807721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.807989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.807999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.808294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.808303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.808569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.808579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.808728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.808738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.808946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.808955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.809263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.809273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.809596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.809605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.809821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.809831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.809983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.809992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 
00:31:22.620 [2024-06-07 21:48:22.810224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.810234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.810543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.810554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.810699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.810709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-07 21:48:22.811031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-07 21:48:22.811040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.811305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.811316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.811468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.811478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.811783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.811793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.812004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.812014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.812329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.812340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.812547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.812556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 
00:31:22.621 [2024-06-07 21:48:22.812825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.812834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.813101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.813110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.813350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.813360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.813561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.813570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.813767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.813777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.814096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.814106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.814320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.814329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.814537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.814546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.814865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.814875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.815082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.815092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 
00:31:22.621 [2024-06-07 21:48:22.815248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.815257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.815394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.815404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.815629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.815638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.815845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.815854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.816021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.816035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.816303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.816313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.816517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.816526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.816728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.816738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.816894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.816904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.817016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.817030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 
00:31:22.621 [2024-06-07 21:48:22.817120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.817130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.817349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.817358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.817645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.817654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.817802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-07 21:48:22.817812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.621 [2024-06-07 21:48:22.817956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.817966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.818210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.818220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.818379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.818389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.818529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.818539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.818747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.818757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.818955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.818964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 
00:31:22.622 [2024-06-07 21:48:22.819253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.819263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.819565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.819577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.819790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.819800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.819964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.819973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.820180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.820190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.820416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.820426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.820648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.820658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.820951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.820961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.821175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.821185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.821374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.821383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 
00:31:22.622 [2024-06-07 21:48:22.821649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.821659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.821901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.821911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.822041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.822051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.822260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.822270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.822465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.822474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.822682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.822691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.822886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.822896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.823036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.823046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.823207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.823218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.823501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.823511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 
00:31:22.622 [2024-06-07 21:48:22.823782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.823792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.823999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.824009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.824295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.824304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.824416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.824425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.824642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.824651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.824866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.824875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.622 qpair failed and we were unable to recover it. 00:31:22.622 [2024-06-07 21:48:22.825157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.622 [2024-06-07 21:48:22.825166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.825384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.825394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.825662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.825672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.825867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.825877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 
00:31:22.623 [2024-06-07 21:48:22.826082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.826092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.826303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.826312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.826453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.826463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.826676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.826685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.826896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.826905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.827110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.827120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.827266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.827275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.827410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.827421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.827639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.827649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.827940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.827950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 
00:31:22.623 [2024-06-07 21:48:22.828219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.828229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.828382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.828393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.828614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.828624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.828765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.828775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.829076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.829086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.829284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.829294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.829598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.829608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.829851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.829861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.830078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.830088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.830298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.830308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 
00:31:22.623 [2024-06-07 21:48:22.830579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.830589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.830867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.830877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.831031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.831041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.831198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.831207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.831472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.831482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.831619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.831629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.831823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.831833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.832046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.832056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.832343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.832353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.832632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.832642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 
00:31:22.623 [2024-06-07 21:48:22.832866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.832876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.833044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.833054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.623 [2024-06-07 21:48:22.833288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.623 [2024-06-07 21:48:22.833298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.623 qpair failed and we were unable to recover it. 00:31:22.624 [2024-06-07 21:48:22.833592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.624 [2024-06-07 21:48:22.833602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.624 qpair failed and we were unable to recover it. 00:31:22.624 [2024-06-07 21:48:22.833898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.624 [2024-06-07 21:48:22.833907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.624 qpair failed and we were unable to recover it. 00:31:22.624 [2024-06-07 21:48:22.834143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.624 [2024-06-07 21:48:22.834153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.624 qpair failed and we were unable to recover it. 00:31:22.624 [2024-06-07 21:48:22.834344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.624 [2024-06-07 21:48:22.834353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.624 qpair failed and we were unable to recover it. 00:31:22.624 [2024-06-07 21:48:22.834502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.624 [2024-06-07 21:48:22.834512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.624 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.834782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.834792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.835090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.835100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 
00:31:22.904 [2024-06-07 21:48:22.835366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.835376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.835586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.835596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.835812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.835821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.835971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.835981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.836188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.836198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.836488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.836498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.836701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.836710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.836846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.836855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.837014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.837028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 00:31:22.904 [2024-06-07 21:48:22.837252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.904 [2024-06-07 21:48:22.837261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.904 qpair failed and we were unable to recover it. 
00:31:22.904 [2024-06-07 21:48:22.837480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.904 [2024-06-07 21:48:22.837490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.904 qpair failed and we were unable to recover it.
00:31:22.904 [2024-06-07 21:48:22.839853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:22.910 [... the three-line sequence above ("connect() failed, errno = 111" / "sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats 209 times between 21:48:22.837480 and 21:48:22.885161 (log wall-clock 00:31:22.904 through 00:31:22.910), identical except for timestamps; only the first occurrence is shown ...]
00:31:22.910 [2024-06-07 21:48:22.885320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.885331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.885555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.885566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.885720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.885730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.886034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.886044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.886271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.886282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.886501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.886513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.886753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.886764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.887045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.887055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.887196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.887206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.887369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.887379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 
00:31:22.910 [2024-06-07 21:48:22.887581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.887591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.887740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.887750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.888017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.888032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.888242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.888252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.888388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.888398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.888693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.888703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.888982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.888992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.889223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.889233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.889449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.889459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.889727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.889737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 
00:31:22.910 [2024-06-07 21:48:22.890012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.890022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.890261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.890271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.890484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.890494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.890652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.910 [2024-06-07 21:48:22.890663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.910 qpair failed and we were unable to recover it. 00:31:22.910 [2024-06-07 21:48:22.890930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.890941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.891141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.891151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.891358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.891369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.891577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.891588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.891828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.891838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.891994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.892004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 
00:31:22.911 [2024-06-07 21:48:22.892213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.892223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.892514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.892524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.892798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.892809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.893010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.893019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.893221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.893231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.893432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.893441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.893710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.893720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.894015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.894035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.894246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.894256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.894421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.894431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 
00:31:22.911 [2024-06-07 21:48:22.894698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.894707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.894918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.894927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.895072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.895082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.895288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.895298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.895443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.895452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.895754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.895763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.895915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.895924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.896138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.896148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.896372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.896381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.896601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.896610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 
00:31:22.911 [2024-06-07 21:48:22.896767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.896776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.896981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.896990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.897194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.897204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.897467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.897477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.897767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.897776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.897992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.898002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.898210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.898219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.898373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.898383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.898675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.898684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.898915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.898925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 
00:31:22.911 [2024-06-07 21:48:22.899203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.899213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.899422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.911 [2024-06-07 21:48:22.899431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.911 qpair failed and we were unable to recover it. 00:31:22.911 [2024-06-07 21:48:22.899634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.899643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.899804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.899814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.900010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.900020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.900257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.900266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.900502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.900512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.900740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.900749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.900886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.900896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.901220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.901230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 
00:31:22.912 [2024-06-07 21:48:22.901454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.901464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.901604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.901614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.901741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.901752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.901965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.901975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.902125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.902135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.902332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.902341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.902488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.902497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.902655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.902665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.902866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.902875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.903023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.903042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 
00:31:22.912 [2024-06-07 21:48:22.903255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.903264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.903461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.903470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.903666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.903675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.903976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.903986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.904218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.904228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.904380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.904390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.904630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.904639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.904907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.904917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.905114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.905124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.905329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.905338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 
00:31:22.912 [2024-06-07 21:48:22.905575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.905584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.905850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.905859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.906085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.906095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.906303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.906313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.906534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.906543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.906765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.906774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.906981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.906990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.907203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.907212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.907368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.907378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 00:31:22.912 [2024-06-07 21:48:22.907518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.907528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.912 qpair failed and we were unable to recover it. 
00:31:22.912 [2024-06-07 21:48:22.907778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.912 [2024-06-07 21:48:22.907787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.907996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.908005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.908299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.908309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.908547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.908557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.908864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.908873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.909111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.909121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.909454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.909464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.909754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.909764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.909911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.909921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.910075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.910085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 
00:31:22.913 [2024-06-07 21:48:22.910235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.910244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.910386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.910396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.910669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.910681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.910827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.910837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.911040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.911050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.911350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.911360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.911509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.911519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.911816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.911825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.912070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.912080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.912353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.912363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 
00:31:22.913 [2024-06-07 21:48:22.912583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.912592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.912915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.912925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.913134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.913145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.913355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.913367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.913565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.913574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.913724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.913734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.913933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.913943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.914140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.914150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.914372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.914381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.914655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.914664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 
00:31:22.913 [2024-06-07 21:48:22.914955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.914965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.915120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.915129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.913 qpair failed and we were unable to recover it. 00:31:22.913 [2024-06-07 21:48:22.915285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.913 [2024-06-07 21:48:22.915294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.915491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.915501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.915776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.915785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.916021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.916034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.916241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.916250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.916401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.916411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.916619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.916628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.916838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.916847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 
00:31:22.914 [2024-06-07 21:48:22.917138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.917148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.917425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.917434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.917650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.917660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.917934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.917943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.918078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.918087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.918350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.918360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.918514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.918523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.918736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.918746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.919011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.919020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.919298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.919308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 
00:31:22.914 [2024-06-07 21:48:22.919451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.919460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.919752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.919761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.919995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.920006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.920226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.920236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.920480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.920490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.920767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.920776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.920973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.920982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.921178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.921188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.921465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.921474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.921744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.921754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 
00:31:22.914 [2024-06-07 21:48:22.921970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.921979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.922187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.922197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.922485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.922495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.922652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.922662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.922860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.922869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.923021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.923035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.923233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.923243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.923382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.923392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.923540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.923550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.914 [2024-06-07 21:48:22.923814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.923823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 
00:31:22.914 [2024-06-07 21:48:22.924042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.914 [2024-06-07 21:48:22.924052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.914 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.924333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.924343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.924483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.924492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.924712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.924722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.924871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.924880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.925021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.925037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.925252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.925261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.925537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.925546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.925758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.925767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.925966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.925975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 
00:31:22.915 [2024-06-07 21:48:22.926118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.926128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.926366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.926376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.926620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.926630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.926852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.926861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.927066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.927076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.927293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.927303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.927516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.927526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.927670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.927680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.927832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.927842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.928061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.928072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 
00:31:22.915 [2024-06-07 21:48:22.928223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.915 [2024-06-07 21:48:22.928233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.915 qpair failed and we were unable to recover it.
00:31:22.915 [2024-06-07 21:48:22.928451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.915 [2024-06-07 21:48:22.928461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.915 qpair failed and we were unable to recover it.
00:31:22.915 [2024-06-07 21:48:22.928671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.915 [2024-06-07 21:48:22.928684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.915 qpair failed and we were unable to recover it.
00:31:22.915 [2024-06-07 21:48:22.928911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.915 [2024-06-07 21:48:22.928920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.915 qpair failed and we were unable to recover it.
00:31:22.915 [2024-06-07 21:48:22.929126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.915 [2024-06-07 21:48:22.929136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.915 qpair failed and we were unable to recover it.
00:31:22.915 [2024-06-07 21:48:22.929195] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:22.915 [2024-06-07 21:48:22.929230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:22.915 [2024-06-07 21:48:22.929241] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:22.915 [2024-06-07 21:48:22.929250] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:22.915 [2024-06-07 21:48:22.929258] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:22.915 [2024-06-07 21:48:22.929377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:31:22.915 [2024-06-07 21:48:22.929461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.915 [2024-06-07 21:48:22.929471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.915 qpair failed and we were unable to recover it.
00:31:22.915 [2024-06-07 21:48:22.929492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:31:22.915 [2024-06-07 21:48:22.929626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:31:22.915 [2024-06-07 21:48:22.929656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.915 [2024-06-07 21:48:22.929702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:22.915 qpair failed and we were unable to recover it.
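The app.c NOTICE lines above are SPDK's standard tracing hint, emitted as the nvmf target finishes starting up: the tracepoint group mask 0xFFFF enables every trace group, a runtime snapshot of the target's events can be captured with spdk_trace -s nvmf -i 0 (or plain spdk_trace, since this is the only SPDK application running, as the notice itself says), and the shared-memory trace file /dev/shm/nvmf_trace.0 can be copied off the machine for offline analysis. The reactor.c notices that follow simply record the event-loop threads coming up on cores 4-7 while the initiator's connect retries are still failing.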
00:31:22.915 [2024-06-07 21:48:22.929626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:22.915 [2024-06-07 21:48:22.930087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.930158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.930378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.930413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.930755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.930786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b4d60 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.931081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.931093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.931252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.931261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.931504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.931514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.931648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.931658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.931856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.931865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.915 [2024-06-07 21:48:22.932077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.915 [2024-06-07 21:48:22.932087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.915 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.932296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.932305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 
00:31:22.916 [2024-06-07 21:48:22.932466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.932476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.932684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.932693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.932961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.932971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.933247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.933256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.933501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.933511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.933669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.933679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.933805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.933814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.934092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.934102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.934300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.934310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.934517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.934527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 
00:31:22.916 [2024-06-07 21:48:22.934818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.934828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.934973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.934983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.935203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.935213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.935406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.935416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.935573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.935583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.935874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.935884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.936179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.936189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.936440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.936450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.936680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.936690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.936849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.936859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 
00:31:22.916 [2024-06-07 21:48:22.937005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.937015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.937163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.937173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.937379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.937392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.937590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.937600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.937867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.937877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.938102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.938112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.938381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.938391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.938694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.938704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.938918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.938928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.939196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.939207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 
00:31:22.916 [2024-06-07 21:48:22.939478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.939490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.939702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.939713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.940008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.940018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.940265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.940276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.940570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.940581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.940856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.940867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.941084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.916 [2024-06-07 21:48:22.941094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.916 qpair failed and we were unable to recover it. 00:31:22.916 [2024-06-07 21:48:22.941259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.941269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.941564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.941574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.941788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.941798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 
00:31:22.917 [2024-06-07 21:48:22.942011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.942021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.942236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.942247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.942407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.942418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.942612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.942622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.942754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.942764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.942990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.943001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.943216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.943228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.943369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.943379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.943592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.943604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.943764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.943774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 
00:31:22.917 [2024-06-07 21:48:22.943936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.943947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.944187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.944198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.944403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.944414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.944616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.944627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.944756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.944766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.944908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.944917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.945210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.945221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.945421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.945432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.945694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.945706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.945927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.945939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 
00:31:22.917 [2024-06-07 21:48:22.946151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.946162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.946457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.946468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.946669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.946683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.946890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.946901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.947104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.947114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.947274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.947286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.947504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.947515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.947672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.947682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.947976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.947988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.948289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.948300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 
00:31:22.917 [2024-06-07 21:48:22.948494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.948505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.948668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.948678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.948833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.948845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.949064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.949076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.917 qpair failed and we were unable to recover it. 00:31:22.917 [2024-06-07 21:48:22.949206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.917 [2024-06-07 21:48:22.949218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.949417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.949427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.949559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.949570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.949866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.949879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.950080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.950092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.950309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.950319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 
00:31:22.918 [2024-06-07 21:48:22.950479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.950489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.950645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.950655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.950814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.950825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.951057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.951068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.951222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.951233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.951376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.951386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.951587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.951598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.951742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.951752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.951955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.951966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 00:31:22.918 [2024-06-07 21:48:22.952186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.918 [2024-06-07 21:48:22.952198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.918 qpair failed and we were unable to recover it. 
00:31:22.918 [2024-06-07 21:48:22.952409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.918 [2024-06-07 21:48:22.952419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.918 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet above repeats, with advancing timestamps, 180 times in total for tqpair=0x7fb7bc000b90 (21:48:22.952409 through 21:48:22.991015); only the first and last occurrences are shown ...]
00:31:22.923 [2024-06-07 21:48:22.991006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.923 [2024-06-07 21:48:22.991015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.923 qpair failed and we were unable to recover it.
00:31:22.923 [2024-06-07 21:48:22.991211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.923 [2024-06-07 21:48:22.991247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:22.923 qpair failed and we were unable to recover it.
[... the same triplet repeats 30 times in total for tqpair=0x7fb7c4000b90 (21:48:22.991211 through 21:48:22.998475); only the first and last occurrences are shown ...]
00:31:22.924 [2024-06-07 21:48:22.998458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.924 [2024-06-07 21:48:22.998475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420
00:31:22.924 qpair failed and we were unable to recover it.
00:31:22.924 [2024-06-07 21:48:22.998649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:22.998666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:22.998815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:22.998832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:22.998971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:22.998984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:22.999148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:22.999158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:22.999476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:22.999486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:22.999715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:22.999725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:22.999929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:22.999938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.000208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.000217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.000507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.000516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.000663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.000673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 
00:31:22.924 [2024-06-07 21:48:23.000819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.000828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.000957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.000966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.001121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.001131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.001269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.001279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.001495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.001504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.001699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.001709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.001867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.001878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.002008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.002018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.002226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.002236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.002532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.002541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 
00:31:22.924 [2024-06-07 21:48:23.002677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.002686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.002913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.002923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.003066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.003076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.003290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.003299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.003532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.003542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.003745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.003754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.003961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.003971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.004115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.004124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.004347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.004356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.004504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.004514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 
00:31:22.924 [2024-06-07 21:48:23.004725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.004735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.004882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.004891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.924 qpair failed and we were unable to recover it. 00:31:22.924 [2024-06-07 21:48:23.005158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.924 [2024-06-07 21:48:23.005167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.005358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.005368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.005612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.005621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.005768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.005778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.005915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.005924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.006077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.006087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.006301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.006311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.006477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.006487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 
00:31:22.925 [2024-06-07 21:48:23.006699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.006708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.006977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.006986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.007257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.007269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.007402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.007412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.007554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.007564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.007692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.007702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.007913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.007923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.008124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.008134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.008353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.008362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.008564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.008574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 
00:31:22.925 [2024-06-07 21:48:23.008805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.008814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.009034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.009043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.009201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.009211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.009417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.009426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.009625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.009634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.009854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.009864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.010087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.010096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.010324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.010334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.010483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.010493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.010735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.010744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 
00:31:22.925 [2024-06-07 21:48:23.010981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.010990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.011217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.011227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.011364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.011374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.925 qpair failed and we were unable to recover it. 00:31:22.925 [2024-06-07 21:48:23.011526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.925 [2024-06-07 21:48:23.011535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.011697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.011706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.011922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.011931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.012140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.012150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.012352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.012361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.012536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.012545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.012677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.012686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 
00:31:22.926 [2024-06-07 21:48:23.012906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.012915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.013043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.013052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.013186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.013196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.013401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.013411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.013610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.013620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.013709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.013718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.013873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.013883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.014043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.014053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.014329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.014339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.014479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.014488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 
00:31:22.926 [2024-06-07 21:48:23.014618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.014628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.014845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.014854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.015000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.015011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.015230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.015240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.015382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.015392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.015544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.015553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.015751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.015761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.015887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.015896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.016167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.016177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 00:31:22.926 [2024-06-07 21:48:23.016403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.016413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.926 qpair failed and we were unable to recover it. 
00:31:22.926 [2024-06-07 21:48:23.016573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.926 [2024-06-07 21:48:23.016583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.016778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.016787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.017008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.017017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.017238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.017247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.017491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.017501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.017710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.017720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.017988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.017997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.018148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.018158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.018297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.018307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.018628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.018637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 
00:31:22.927 [2024-06-07 21:48:23.018786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.018796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.018996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.019006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.019214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.019224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.019438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.019448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.019580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.019590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.019722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.019732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.020008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.020017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.020226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.020236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.020525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.020535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.020688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.020698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 
00:31:22.927 [2024-06-07 21:48:23.020837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.020846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.927 [2024-06-07 21:48:23.020997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.927 [2024-06-07 21:48:23.021006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.927 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.021252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.021262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.021392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.021402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.021601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.021611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.021804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.021813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.021966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.021976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.022185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.022194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.022325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.022335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.022655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.022664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 
00:31:22.928 [2024-06-07 21:48:23.022785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.022794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.023033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.023043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.023194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.023204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.023438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.023447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.023689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.023698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.023897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.023907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.928 qpair failed and we were unable to recover it. 00:31:22.928 [2024-06-07 21:48:23.024045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.928 [2024-06-07 21:48:23.024055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.929 qpair failed and we were unable to recover it. 00:31:22.929 [2024-06-07 21:48:23.024199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.929 [2024-06-07 21:48:23.024209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.929 qpair failed and we were unable to recover it. 00:31:22.929 [2024-06-07 21:48:23.024349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.929 [2024-06-07 21:48:23.024358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.929 qpair failed and we were unable to recover it. 00:31:22.929 [2024-06-07 21:48:23.024497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.929 [2024-06-07 21:48:23.024506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.929 qpair failed and we were unable to recover it. 
00:31:22.929 [2024-06-07 21:48:23.024710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.929 [2024-06-07 21:48:23.024719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.929 qpair failed and we were unable to recover it.
00:31:22.938 [... the same three-line failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats back-to-back through 21:48:23.067117; only the timestamps change ...]
00:31:22.938 [2024-06-07 21:48:23.067298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.067307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.067448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.067457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.067594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.067603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.067727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.067736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.067936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.067946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.068266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.068276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.068484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.068494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.068735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.068744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.068966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.068976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.069110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.069122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 
00:31:22.938 [2024-06-07 21:48:23.069456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.069466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.069612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.069621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.069828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.069838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.070050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.070060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.070343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.070353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.938 qpair failed and we were unable to recover it. 00:31:22.938 [2024-06-07 21:48:23.070496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.938 [2024-06-07 21:48:23.070505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.070641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.070650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.070960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.070970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.071211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.071221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.071359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.071369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 
00:31:22.939 [2024-06-07 21:48:23.071518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.071528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.071680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.071689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.071866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.071875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.072028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.072038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.072245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.072255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.072451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.072461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.072660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.072669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.072803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.072812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.073009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.073018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.073221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.073230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 
00:31:22.939 [2024-06-07 21:48:23.073522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.073531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.073802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.073811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.074043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.074053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.074336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.074346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.074619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.074628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.074842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.074851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.075063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.075073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.939 qpair failed and we were unable to recover it. 00:31:22.939 [2024-06-07 21:48:23.075303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.939 [2024-06-07 21:48:23.075312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.075524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.075534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.075802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.075811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 
00:31:22.940 [2024-06-07 21:48:23.075950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.075960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.076102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.076113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.076258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.076268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.076418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.076427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.076708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.076717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.076857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.076866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.077131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.077141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.077289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.077299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.077492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.077501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.077640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.077651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 
00:31:22.940 [2024-06-07 21:48:23.077935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.077944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.078168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.078178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.078310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.078319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.078543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.078552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.078681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.078691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.078961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.078971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.079121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.079130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.940 [2024-06-07 21:48:23.079334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.940 [2024-06-07 21:48:23.079345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.940 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.079682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.079691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.079916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.079925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 
00:31:22.941 [2024-06-07 21:48:23.080124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.080133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.080328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.080338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.080495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.080505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.080642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.080652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.080860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.080869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.081087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.081097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.081252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.081263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.081413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.081423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.081579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.081589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.081749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.081758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 
00:31:22.941 [2024-06-07 21:48:23.082040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.082050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.082264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.082274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.082469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.082478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.082604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.082613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.082877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.082887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.083035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.083044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.083268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.083278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.083478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.941 [2024-06-07 21:48:23.083488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.941 qpair failed and we were unable to recover it. 00:31:22.941 [2024-06-07 21:48:23.083626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.083635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.083768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.083777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 
00:31:22.942 [2024-06-07 21:48:23.083934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.083943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.084161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.084170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.084316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.084325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.084484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.084493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.084632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.084642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.084775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.084784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.085054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.085063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.085247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.085256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.085412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.085422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.085546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.085557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 
00:31:22.942 [2024-06-07 21:48:23.085765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.085774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.086003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.086013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.086235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.086245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.086465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.086474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.086617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.086627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.086836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.942 [2024-06-07 21:48:23.086846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.942 qpair failed and we were unable to recover it. 00:31:22.942 [2024-06-07 21:48:23.087111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.087121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.087272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.087281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.087421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.087430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.087695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.087705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 
00:31:22.943 [2024-06-07 21:48:23.087937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.087946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.088158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.088168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.088322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.088331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.088538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.088548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.088694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.088704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.088853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.088863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.089178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.089188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.089320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.089329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.089539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.943 [2024-06-07 21:48:23.089548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.943 qpair failed and we were unable to recover it. 00:31:22.943 [2024-06-07 21:48:23.089760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.089770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 
00:31:22.944 [2024-06-07 21:48:23.089975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.089984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.090252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.090262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.090415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.090425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.090565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.090574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.090841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.090851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.091075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.091085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.091298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.091308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.091591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.091600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.091787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.091797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.092008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.092017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 
00:31:22.944 [2024-06-07 21:48:23.092307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.092376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.092644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.092679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.092855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.944 [2024-06-07 21:48:23.092878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.944 qpair failed and we were unable to recover it. 00:31:22.944 [2024-06-07 21:48:23.093194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.945 [2024-06-07 21:48:23.093213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.945 qpair failed and we were unable to recover it. 00:31:22.945 [2024-06-07 21:48:23.093389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.945 [2024-06-07 21:48:23.093406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.945 qpair failed and we were unable to recover it. 00:31:22.945 [2024-06-07 21:48:23.093699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.945 [2024-06-07 21:48:23.093717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.945 qpair failed and we were unable to recover it. 00:31:22.945 [2024-06-07 21:48:23.093948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.945 [2024-06-07 21:48:23.093966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.945 qpair failed and we were unable to recover it. 00:31:22.945 [2024-06-07 21:48:23.094195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.945 [2024-06-07 21:48:23.094214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c4000b90 with addr=10.0.0.2, port=4420 00:31:22.945 qpair failed and we were unable to recover it. 00:31:22.945 [2024-06-07 21:48:23.094499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.945 [2024-06-07 21:48:23.094510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.945 qpair failed and we were unable to recover it. 00:31:22.945 [2024-06-07 21:48:23.094728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.945 [2024-06-07 21:48:23.094740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.945 qpair failed and we were unable to recover it. 
00:31:22.945 [2024-06-07 21:48:23.094871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.945 [2024-06-07 21:48:23.094880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:22.945 qpair failed and we were unable to recover it.
[... the identical three-line sequence — posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7fb7bc000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." — repeats approximately 200 more times between 21:48:23.094 and 21:48:23.137; duplicate occurrences elided ...]
00:31:22.955 [2024-06-07 21:48:23.137581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.137590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.137834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.137844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.137970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.137979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.138165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.138175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.138476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.138485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.138692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.138702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.138841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.138850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.138991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.139001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.139202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.139212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 00:31:22.955 [2024-06-07 21:48:23.139477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.955 [2024-06-07 21:48:23.139487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.955 qpair failed and we were unable to recover it. 
00:31:22.955 [2024-06-07 21:48:23.139681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.139690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.139942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.139951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.140181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.140191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.140389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.140399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.140544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.140553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.140832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.140841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.141043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.141055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.141195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.141204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.141403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.141413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.141570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.141579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 
00:31:22.956 [2024-06-07 21:48:23.141728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.141738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.141889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.141899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.142106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.142116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.142317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.142326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.142466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.142475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.142615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.142624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.142845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.142854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.143055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.143065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.143269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.143278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.143475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.143484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 
00:31:22.956 [2024-06-07 21:48:23.143689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.143699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.143987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.143997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.956 [2024-06-07 21:48:23.144123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.956 [2024-06-07 21:48:23.144132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.956 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.144281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.144291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.144577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.144586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.144794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.144803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.144999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.145009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.145278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.145288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.145417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.145426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.145636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.145646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 
00:31:22.957 [2024-06-07 21:48:23.145794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.145804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.146015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.146034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.146268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.146277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.146567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.146577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.146735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.146744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.146943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.146952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.147170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.147180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.147425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.147435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.147594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.147604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.147813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.147823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 
00:31:22.957 [2024-06-07 21:48:23.148032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.148042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.148249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.148259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.148490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.148499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.148696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.148705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.148934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.957 [2024-06-07 21:48:23.148944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.957 qpair failed and we were unable to recover it. 00:31:22.957 [2024-06-07 21:48:23.149238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.149248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.149469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.149481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.149634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.149644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.149972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.149982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.150180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.150189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 
00:31:22.958 [2024-06-07 21:48:23.150343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.150352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.150648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.150657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.150789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.150799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.151000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.151009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.151282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.151292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.151448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.151458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.151587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.151596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:22.958 [2024-06-07 21:48:23.151799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.958 [2024-06-07 21:48:23.151809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:22.958 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.152035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.152045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.152341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.152351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 
00:31:23.242 [2024-06-07 21:48:23.152565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.152575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.152807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.152817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.152960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.152969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.153121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.153131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.153291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.153300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.153566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.153576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.153701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.153711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.154004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.154014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.154170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.154181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.154457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.154467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 
00:31:23.242 [2024-06-07 21:48:23.154614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.154624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.154844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.154853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.154997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.155006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.155211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.155221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.155445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.155454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.155775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.155785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.155982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.155992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.156131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.156141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.156409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.242 [2024-06-07 21:48:23.156418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.242 qpair failed and we were unable to recover it. 00:31:23.242 [2024-06-07 21:48:23.156551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.156560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 
00:31:23.243 [2024-06-07 21:48:23.156774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.156784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.156896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.156905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.157066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.157076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.157287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.157297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.157501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.157511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.157712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.157722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.157879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.157892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.158184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.158194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.158409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.158419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.158597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.158606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 
00:31:23.243 [2024-06-07 21:48:23.158751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.158760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.158906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.158915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.159133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.159143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.159287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.159296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.159525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.159534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.159750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.159759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.159905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.159914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.160050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.160060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.160257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.160267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.160505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.160515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 
00:31:23.243 [2024-06-07 21:48:23.160785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.160796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.160934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.160943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.161162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.161171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.161304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.161314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.161573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.161582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.161729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.161738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.161872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.161882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.162149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.162159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.162290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.162300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.162503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.162513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 
00:31:23.243 [2024-06-07 21:48:23.162712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.162722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.162904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.162914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.163049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.243 [2024-06-07 21:48:23.163059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.243 qpair failed and we were unable to recover it. 00:31:23.243 [2024-06-07 21:48:23.163204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.163213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.163453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.163462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.163615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.163625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.163770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.163780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.163977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.163986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.164156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.164166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.164306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.164315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 
00:31:23.244 [2024-06-07 21:48:23.164445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.164454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.164609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.164619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.164836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.164845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.165057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.165068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.165266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.165275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.165545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.165555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.165711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.165723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.165990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.165999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.166211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.166220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 00:31:23.244 [2024-06-07 21:48:23.166345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.244 [2024-06-07 21:48:23.166355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.244 qpair failed and we were unable to recover it. 
00:31:23.250 [2024-06-07 21:48:23.202810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.202819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.203043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.203053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.203206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.203215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.203508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.203517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.203657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.203667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.203819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.203829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.204039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.204049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.204251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.204260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.204393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.204403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.204567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.204576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 
00:31:23.250 [2024-06-07 21:48:23.204725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.204735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.204968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.204978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.205119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.205129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.205289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.205298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.205503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.205512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.205722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.205731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.205873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.205882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.206080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.206090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.206256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.206265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.206396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.206406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 
00:31:23.250 [2024-06-07 21:48:23.206556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.206565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.250 [2024-06-07 21:48:23.206762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.250 [2024-06-07 21:48:23.206772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.250 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.207050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.207060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.207191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.207201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.207411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.207420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.207559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.207570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.207803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.207813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.208041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.208051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.208267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.208277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.208545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.208555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 
00:31:23.251 [2024-06-07 21:48:23.208787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.208797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.209004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.209014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.209228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.209238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.209438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.209448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.209573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.209582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.209720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.209729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.209889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.209901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.210168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.210178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.210379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.210389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.210537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.210547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 
00:31:23.251 [2024-06-07 21:48:23.210706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.210716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.210855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.210864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.210950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.210959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.211094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.211104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.211253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.211263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.211556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.211566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.211709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.211718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.211854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.211864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.212008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.212018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.212170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.212179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 
00:31:23.251 [2024-06-07 21:48:23.212405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.212415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.212646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.212655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.212753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.212762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.212964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.212974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.213257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.213267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.213410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.213420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.213643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.213653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.213785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.213795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.251 [2024-06-07 21:48:23.214014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.251 [2024-06-07 21:48:23.214029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.251 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.214249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.214258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 
00:31:23.252 [2024-06-07 21:48:23.214473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.214482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.214703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.214713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.214856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.214866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.215135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.215145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.215352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.215362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.215560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.215570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.215699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.215709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.215974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.215984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.216127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.216137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.216341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.216351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 
00:31:23.252 [2024-06-07 21:48:23.216567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.216576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.216726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.216735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.216891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.216900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.217118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.217127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.217324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.217333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.217653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.217662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.217928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.217939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.218143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.218153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.218375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.218385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.218602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.218611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 
00:31:23.252 [2024-06-07 21:48:23.218829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.218838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.219058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.219067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.219366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.219376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.219605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.219615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.219774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.219783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.219921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.219930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.220198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.220208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.252 qpair failed and we were unable to recover it. 00:31:23.252 [2024-06-07 21:48:23.220408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.252 [2024-06-07 21:48:23.220418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.220614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.220624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.220764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.220774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 
00:31:23.253 [2024-06-07 21:48:23.220915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.220924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.221077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.221087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.221249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.221258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.221475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.221484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.221702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.221711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.221864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.221873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.222103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.222113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.222319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.222328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.222539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.222548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.222700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.222709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 
00:31:23.253 [2024-06-07 21:48:23.222909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.222918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.223128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.223138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.223293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.223303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.223501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.223511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.223718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.223727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.224065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.224075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.224340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.224349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.224614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.224624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.224890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.224900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.225112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.225121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 
00:31:23.253 [2024-06-07 21:48:23.225336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.225346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.225488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.225498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.225641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.225650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.225886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.225895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.226045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.226055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.226197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.226206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.226353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.226364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.226632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.226642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.226791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.226800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.227016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.227031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 
00:31:23.253 [2024-06-07 21:48:23.227173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.227182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.227398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.227408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.253 [2024-06-07 21:48:23.227615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.253 [2024-06-07 21:48:23.227625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.253 qpair failed and we were unable to recover it. 00:31:23.254 [2024-06-07 21:48:23.227854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.254 [2024-06-07 21:48:23.227863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.254 qpair failed and we were unable to recover it. 00:31:23.254 [2024-06-07 21:48:23.228007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.254 [2024-06-07 21:48:23.228016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.254 qpair failed and we were unable to recover it. 00:31:23.254 [2024-06-07 21:48:23.228162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.254 [2024-06-07 21:48:23.228172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.254 qpair failed and we were unable to recover it. 00:31:23.254 [2024-06-07 21:48:23.228312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.254 [2024-06-07 21:48:23.228322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.254 qpair failed and we were unable to recover it. 00:31:23.254 [2024-06-07 21:48:23.228541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.254 [2024-06-07 21:48:23.228550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.254 qpair failed and we were unable to recover it. 00:31:23.254 [2024-06-07 21:48:23.228793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.254 [2024-06-07 21:48:23.228803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.254 qpair failed and we were unable to recover it. 00:31:23.254 [2024-06-07 21:48:23.229008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.254 [2024-06-07 21:48:23.229018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.254 qpair failed and we were unable to recover it. 
00:31:23.254 [2024-06-07 21:48:23.229320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.254 [2024-06-07 21:48:23.229330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.254 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between the first and last occurrences shown, always for tqpair=0x7fb7bc000b90, addr=10.0.0.2, port=4420 ...]
00:31:23.260 [2024-06-07 21:48:23.271293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.260 [2024-06-07 21:48:23.271303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.260 qpair failed and we were unable to recover it.
00:31:23.260 [2024-06-07 21:48:23.271569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.271579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.271777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.271786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.271988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.271998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.272102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.272113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.272319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.272328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.272456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.272465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.272593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.272603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.272743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.272752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.272951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.272960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.273104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.273114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 
00:31:23.260 [2024-06-07 21:48:23.273421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.273430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.273578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.273588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.273746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.273756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.273881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.273891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.274110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.274119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.274260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.274269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.274469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.274478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.274685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.260 [2024-06-07 21:48:23.274695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.260 qpair failed and we were unable to recover it. 00:31:23.260 [2024-06-07 21:48:23.274867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.274877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.275071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.275080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 
00:31:23.261 [2024-06-07 21:48:23.275218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.275228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.275379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.275389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.275612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.275621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.275767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.275776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.275915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.275925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.276062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.276071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.276243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.276252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.276388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.276397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.276524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.276534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.276851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.276860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 
00:31:23.261 [2024-06-07 21:48:23.277157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.277167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.277327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.277336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.277601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.277610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.277812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.277821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.277965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.277974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.278130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.278140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.278288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.278297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.278591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.278600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.278761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.278771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.278906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.278916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 
00:31:23.261 [2024-06-07 21:48:23.279120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.279130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.279346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.279356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.279570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.279579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.279715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.279726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.279865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.279875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.280112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.280122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.280337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.280346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.280559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.280568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.280781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.280791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.280919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.280928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 
00:31:23.261 [2024-06-07 21:48:23.281076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.281086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.281318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.281328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.281537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.281547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.281697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.261 [2024-06-07 21:48:23.281706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.261 qpair failed and we were unable to recover it. 00:31:23.261 [2024-06-07 21:48:23.281907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.281917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.282203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.282213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.282443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.282452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.282664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.282674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.282800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.282809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.283010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.283019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 
00:31:23.262 [2024-06-07 21:48:23.283166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.283175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.283332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.283341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.283543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.283552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.283791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.283800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.283958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.283967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.284108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.284118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.284252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.284261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.284474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.284483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.284777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.284786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.285003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.285012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 
00:31:23.262 [2024-06-07 21:48:23.285285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.285294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.285418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.285427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.285639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.285648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.285912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.285921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.286121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.286131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.286349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.286358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.286564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.286574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.286808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.286818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.286948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.286957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.287166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.287176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 
00:31:23.262 [2024-06-07 21:48:23.287314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.287323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.262 [2024-06-07 21:48:23.287525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.262 [2024-06-07 21:48:23.287535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.262 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.287673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.287683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.287830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.287840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.287986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.287995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.288141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.288151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.288313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.288322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.288588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.288598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.288715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.288724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.288815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.288824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 
00:31:23.263 [2024-06-07 21:48:23.289020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.289034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.289242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.289251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.289391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.289401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.289598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.289607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.289805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.289814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.289971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.289980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.290133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.290142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.290355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.290365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.290464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.290474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.290792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.290802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 
00:31:23.263 [2024-06-07 21:48:23.290997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.291007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.291157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.291167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.291309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.291319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.291533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.291543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.291844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.291854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.292141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.292151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.292312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.292321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.292539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.292549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.292693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.292702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.292848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.292857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 
00:31:23.263 [2024-06-07 21:48:23.293101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.293110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.293256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.293266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.293418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.293427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.293656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.293665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.293807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.293816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.294015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.294028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.294184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.294194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.263 [2024-06-07 21:48:23.294402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.263 [2024-06-07 21:48:23.294411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.263 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.294572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.294582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.294814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.294824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 
00:31:23.264 [2024-06-07 21:48:23.295143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.295153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.295308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.295318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.295463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.295473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.295623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.295634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.295785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.295794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.295999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.296009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.296248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.296258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.296454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.296464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.296666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.296675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 00:31:23.264 [2024-06-07 21:48:23.296819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.264 [2024-06-07 21:48:23.296829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.264 qpair failed and we were unable to recover it. 
00:31:23.264 [2024-06-07 21:48:23.296970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.264 [2024-06-07 21:48:23.296980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.264 qpair failed and we were unable to recover it.
00:31:23.270 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, only the timestamps advancing, from 21:48:23.297122 through 21:48:23.339515 ...]
00:31:23.270 [2024-06-07 21:48:23.339681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.339690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.339890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.339900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.340035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.340045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.340205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.340215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.340350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.340360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.340627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.340637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.340796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.340805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.341099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.341108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.341247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.341257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.341387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.341397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 
00:31:23.270 [2024-06-07 21:48:23.341529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.341540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.341835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.341844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.341982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.341991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.342204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.342213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.342473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.342484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.342700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.342710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.342908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.342917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.343065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.343075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.343207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.343216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.343361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.343371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 
00:31:23.270 [2024-06-07 21:48:23.343514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.343524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.343672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.343682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.343811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.343820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.344088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.344097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.344242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.344252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.270 qpair failed and we were unable to recover it. 00:31:23.270 [2024-06-07 21:48:23.344513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.270 [2024-06-07 21:48:23.344523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.344739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.344748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.344965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.344974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.345135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.345144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.345294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.345303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 
00:31:23.271 [2024-06-07 21:48:23.345504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.345513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.345646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.345655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.345797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.345806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.345941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.345950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.346236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.346246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.346389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.346399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.346633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.346643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.346822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.346831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.346975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.346984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.347200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.347210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 
00:31:23.271 [2024-06-07 21:48:23.347355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.347364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.347599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.347608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.347757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.347766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.347923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.347933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.348074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.348084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.348227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.348237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.348515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.348525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.348685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.348695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.348835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.348844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.349054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.349063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 
00:31:23.271 [2024-06-07 21:48:23.349258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.349270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.349483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.349492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.349759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.349768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.349980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.349990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.350132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.350142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.350285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.350295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.350433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.350443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.350573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.350582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.350727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.350736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 00:31:23.271 [2024-06-07 21:48:23.350878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.271 [2024-06-07 21:48:23.350887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.271 qpair failed and we were unable to recover it. 
00:31:23.271 [2024-06-07 21:48:23.351042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.351052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.351197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.351206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.351417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.351426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.351636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.351645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.351787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.351796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.351929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.351938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.352145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.352155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.352356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.352366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.352498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.352507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.352718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.352727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 
00:31:23.273 [2024-06-07 21:48:23.352931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.352941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.353081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.353091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.353287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.353296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.353497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.353506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.353797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.353807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.353950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.353960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.354158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.273 [2024-06-07 21:48:23.354168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.273 qpair failed and we were unable to recover it. 00:31:23.273 [2024-06-07 21:48:23.354315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.354325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.354472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.354482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.354614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.354624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 
00:31:23.274 [2024-06-07 21:48:23.354823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.354833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.355064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.355074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.355236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.355245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.355453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.355463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.355602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.355611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.355767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.355778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.355918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.355928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.356093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.356104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.356328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.356337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.356482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.356492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 
00:31:23.274 [2024-06-07 21:48:23.356695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.356707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.356833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.356843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.357017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.357031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.357170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.357179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.357328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.357338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.357475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.357484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.357705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.357714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.357916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.357926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.358148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.358158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.358442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.358452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 
00:31:23.274 [2024-06-07 21:48:23.358608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.358618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.358823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.358832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.358978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.358988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.359178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.359188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.359433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.359443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.359648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.359658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.359804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.359814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.360081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.360091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.360238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.360249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.360340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.360350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 
00:31:23.274 [2024-06-07 21:48:23.360628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.360638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.360794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.360803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.360936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.360945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.274 qpair failed and we were unable to recover it. 00:31:23.274 [2024-06-07 21:48:23.361145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.274 [2024-06-07 21:48:23.361155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.361379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.361388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.361515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.361525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.361756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.361766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.361962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.361972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.362101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.362111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.362237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.362246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 
00:31:23.275 [2024-06-07 21:48:23.362403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.362412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.362627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.362637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.362909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.362919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.363136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.363145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.363296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.363306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.363500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.363510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.363722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.363732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.363858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.363868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.364125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.364143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 00:31:23.275 [2024-06-07 21:48:23.364324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.364333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 
00:31:23.275 [2024-06-07 21:48:23.364577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.275 [2024-06-07 21:48:23.364589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.275 qpair failed and we were unable to recover it. 
[... the identical error pair (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420) repeats for every reconnect attempt between 21:48:23.364 and 21:48:23.404; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:23.281 [2024-06-07 21:48:23.404785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.404794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 
00:31:23.281 [2024-06-07 21:48:23.404897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.404906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.405004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.405013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.405173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.405183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.405446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.405456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.405593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.405603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.405741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.405750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.405901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.405910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.406077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.406087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.406242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.406251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.406458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.406468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 
00:31:23.281 [2024-06-07 21:48:23.406604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.406614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.406816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.406825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.407092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.407101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.407314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.407324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.407457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.407467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.407611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.281 [2024-06-07 21:48:23.407620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.281 qpair failed and we were unable to recover it. 00:31:23.281 [2024-06-07 21:48:23.407751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.407760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.407963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.407972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.408173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.408183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.408397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.408407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 
00:31:23.282 [2024-06-07 21:48:23.408645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.408655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.408792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.408801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.408942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.408952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.409052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.409061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.409267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.409276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.409468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.409478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.409671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.409680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.409814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.409823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.409956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.409965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.410180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.410190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 
00:31:23.282 [2024-06-07 21:48:23.410443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.410454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.410597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.410606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.410817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.410826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.411023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.411038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.411265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.411275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.411416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.411426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.411559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.411568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.411772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.411782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.412011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.412020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.412163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.412173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 
00:31:23.282 [2024-06-07 21:48:23.412312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.412321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.412467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.412477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.412796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.412805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.412942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.412952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.413103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.413113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.413251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.413261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.413459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.413469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.413665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.413675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.413824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.413834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.282 qpair failed and we were unable to recover it. 00:31:23.282 [2024-06-07 21:48:23.414039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.282 [2024-06-07 21:48:23.414048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 
00:31:23.283 [2024-06-07 21:48:23.414204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.414213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.414438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.414447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.414601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.414611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.414759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.414768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.414973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.414982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.415203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.415212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.415439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.415449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.415603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.415612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.415739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.415749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.415947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.415956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 
00:31:23.283 [2024-06-07 21:48:23.416104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.416114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.416381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.416391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.416659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.416669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.416960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.416970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.417138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.417149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.417346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.417356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.417501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.417510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.417705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.417715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.417845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.417854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.417989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.417999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 
00:31:23.283 [2024-06-07 21:48:23.418201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.418213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.418356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.418365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.418572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.418582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.418810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.418819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.419030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.419040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.419238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.419248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.419380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.419389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.419600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.419609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.419824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.419835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 00:31:23.283 [2024-06-07 21:48:23.420059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.283 [2024-06-07 21:48:23.420069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.283 qpair failed and we were unable to recover it. 
00:31:23.284 [2024-06-07 21:48:23.420205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.420215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.420381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.420390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.420611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.420620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.420776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.420786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.421081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.421091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.421246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.421256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.421455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.421464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.421589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.421598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.421758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.421767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.421904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.421913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 
00:31:23.284 [2024-06-07 21:48:23.422122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.422131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.422276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.422285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.422514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.422524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.422669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.422679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.422883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.422893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.423106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.423116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.423272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.423280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.423432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.423441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.423548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.423557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.423754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.423763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 
00:31:23.284 [2024-06-07 21:48:23.423968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.423978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.424173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.424183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.424333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.424342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.424488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.424497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.424625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.424634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.424781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.424791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.424949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.424959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.425120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.425130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.425272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.425282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.425515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.425524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 
00:31:23.284 [2024-06-07 21:48:23.425655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.425666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.425803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.425812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.425970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.425979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.426081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.426091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.426232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.426241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.284 [2024-06-07 21:48:23.426462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.284 [2024-06-07 21:48:23.426471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.284 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.426739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.426749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.426959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.426968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.427111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.427120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.427267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.427277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 
00:31:23.285 [2024-06-07 21:48:23.427477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.427486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.427685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.427694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.427824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.427833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.428049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.428059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.428194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.428204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.428343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.428353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.428551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.428561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.428870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.428879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.429043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.429053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 00:31:23.285 [2024-06-07 21:48:23.429260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.285 [2024-06-07 21:48:23.429269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.285 qpair failed and we were unable to recover it. 
00:31:23.285 [2024-06-07 21:48:23.429496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.285 [2024-06-07 21:48:23.429505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.285 qpair failed and we were unable to recover it.
...
00:31:23.290 [2024-06-07 21:48:23.473493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.290 [2024-06-07 21:48:23.473503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.290 qpair failed and we were unable to recover it.
00:31:23.290 [2024-06-07 21:48:23.473638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.473647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.473937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.473947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.474239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.474248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.474460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.474470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.474602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.474611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.474752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.474762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.474969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.474978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.475195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.475205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.475356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.475365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.290 [2024-06-07 21:48:23.475560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.475570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 
00:31:23.290 [2024-06-07 21:48:23.475710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.290 [2024-06-07 21:48:23.475719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.290 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.475989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.475998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.476194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.476204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.476339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.476348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.476483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.476492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.476700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.476709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.476975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.476984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.477149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.477159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.477495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.477504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.477704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.477714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 
00:31:23.291 [2024-06-07 21:48:23.477972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.477981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.478195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.478205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.478428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.478437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.478655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.478664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.478893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.478903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.479115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.479125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.479264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.479275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.479424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.479434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.479642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.479652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.479869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.479878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 
00:31:23.291 [2024-06-07 21:48:23.480174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.480184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.480327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.480337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.480552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.480561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.480762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.480772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.480917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.480927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.481217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.481227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.481440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.481449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.481665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.481677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.481942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.481951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.482163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.482173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 
00:31:23.291 [2024-06-07 21:48:23.482327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.482336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.482565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.482575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.482775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.482785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.483010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.483020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.483239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.483248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.483399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.483408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.483612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.483621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.483750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.483759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.483868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.483877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.484147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.484156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 
00:31:23.291 [2024-06-07 21:48:23.484372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.484381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.484648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.484657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.484872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.291 [2024-06-07 21:48:23.484881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.291 qpair failed and we were unable to recover it. 00:31:23.291 [2024-06-07 21:48:23.485129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.292 [2024-06-07 21:48:23.485138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.292 qpair failed and we were unable to recover it. 00:31:23.292 [2024-06-07 21:48:23.485373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.292 [2024-06-07 21:48:23.485383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.292 qpair failed and we were unable to recover it. 00:31:23.292 [2024-06-07 21:48:23.485536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.292 [2024-06-07 21:48:23.485545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.292 qpair failed and we were unable to recover it. 00:31:23.292 [2024-06-07 21:48:23.485839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.292 [2024-06-07 21:48:23.485849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.292 qpair failed and we were unable to recover it. 00:31:23.292 [2024-06-07 21:48:23.486083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.292 [2024-06-07 21:48:23.486093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.292 qpair failed and we were unable to recover it. 00:31:23.292 [2024-06-07 21:48:23.486324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.292 [2024-06-07 21:48:23.486333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.292 qpair failed and we were unable to recover it. 00:31:23.292 [2024-06-07 21:48:23.486568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.292 [2024-06-07 21:48:23.486577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.292 qpair failed and we were unable to recover it. 
00:31:23.565 [2024-06-07 21:48:23.486794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.565 [2024-06-07 21:48:23.486803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.565 qpair failed and we were unable to recover it. 00:31:23.565 [2024-06-07 21:48:23.487022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.487037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.487248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.487257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.487560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.487570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.487784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.487794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.488061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.488072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.488213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.488222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.488420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.488429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.488661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.488670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.488910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.488919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 
00:31:23.566 [2024-06-07 21:48:23.489202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.489212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.489428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.489437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.489657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.489666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.489976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.489985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.490220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.490230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.490464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.490473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.490637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.490646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.490933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.490944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.491057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.491067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.491291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.491301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 
00:31:23.566 [2024-06-07 21:48:23.491596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.491605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.491849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.491859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.492093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.492103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.492313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.492323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.492535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.492544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.492705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.492715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.493022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.493036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.493199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.493208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.493357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.493367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.493530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.493539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 
00:31:23.566 [2024-06-07 21:48:23.493774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.493783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.494021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.494035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.494264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.494274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.494468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.494478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.494715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.566 [2024-06-07 21:48:23.494724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.566 qpair failed and we were unable to recover it. 00:31:23.566 [2024-06-07 21:48:23.494959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.494968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.495176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.495186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.495325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.495335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.495447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.495457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.495590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.495600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 
00:31:23.567 [2024-06-07 21:48:23.495802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.495811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.496016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.496028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.496191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.496200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.496506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.496516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.496810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.496820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.497050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.497060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.497164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.497174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.497387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.497397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.497613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.497623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.497906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.497915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 
00:31:23.567 [2024-06-07 21:48:23.498123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.498134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.498361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.498370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.498517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.498526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.498789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.498799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.499065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.499075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.499270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.499280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.499548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.499558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.499753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.499766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.500033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.500043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.500313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.500323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 
00:31:23.567 [2024-06-07 21:48:23.500591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.500600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.500747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.500756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.501021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.501040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.501307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.501316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.501580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.501590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.501824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.501835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.502177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.502187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.502390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.502400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.502721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.502732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 00:31:23.567 [2024-06-07 21:48:23.502958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.567 [2024-06-07 21:48:23.502968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.567 qpair failed and we were unable to recover it. 
00:31:23.567 [2024-06-07 21:48:23.503261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.567 [2024-06-07 21:48:23.503271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.567 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats continuously from 21:48:23.503261 through 21:48:23.551323 (console timestamps 00:31:23.567-00:31:23.574), always with the identical tqpair=0x7fb7bc000b90, addr=10.0.0.2, port=4420, and errno = 111: every reconnect attempt of this qpair was refused and it could not be recovered.]
00:31:23.574 [2024-06-07 21:48:23.551544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.551553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.551818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.551828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.552047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.552057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.552209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.552219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.552511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.552523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.552803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.552813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.553105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.553115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.553410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.553421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.553628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.553638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.553874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.553883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 
00:31:23.574 [2024-06-07 21:48:23.554063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.554073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.554286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.554296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.554535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.554545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.554812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.554822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.555118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.555128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.555337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.555346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.555611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.555621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.555884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.555894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.556042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.556052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.556160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.556169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 
00:31:23.574 [2024-06-07 21:48:23.556414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.556424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.556728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.556738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.557033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.557043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.557185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.557195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.557387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.557397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.557643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.557653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.557784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.557794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.557990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.558000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.558151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.558161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.558381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.558391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 
00:31:23.574 [2024-06-07 21:48:23.558549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.558559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.558776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.574 [2024-06-07 21:48:23.558786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.574 qpair failed and we were unable to recover it. 00:31:23.574 [2024-06-07 21:48:23.558943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.558952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.559159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.559169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.559367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.559378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.559573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.559582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.559780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.559790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.559998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.560008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.560215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.560225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.560437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.560447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 
00:31:23.575 [2024-06-07 21:48:23.560741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.560751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.560896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.560906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.561197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.561207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.561345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.561356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.561487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.561499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.561743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.561753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.561957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.561967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.562181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.562192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.562497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.562507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.562744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.562754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 
00:31:23.575 [2024-06-07 21:48:23.563050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.563060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.563346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.563357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.563502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.563512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.563777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.563787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.563937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.563947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.564143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.564153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.564244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.564254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.564407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.564417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.564709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.564719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 00:31:23.575 [2024-06-07 21:48:23.564932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.564942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.575 qpair failed and we were unable to recover it. 
00:31:23.575 [2024-06-07 21:48:23.565252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.575 [2024-06-07 21:48:23.565262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.565369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.565378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.565530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.565540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.565867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.565876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.566193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.566204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.566302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.566312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.566576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.566586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.566833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.566842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.567135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.567144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.567383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.567393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 
00:31:23.576 [2024-06-07 21:48:23.567604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.567615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.567891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.567901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.568146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.568156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.568423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.568433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.568645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.568654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.568924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.568934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.569139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.569149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.569350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.569359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.569514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.569524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.569731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.569740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 
00:31:23.576 [2024-06-07 21:48:23.569898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.569908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.570114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.570125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.570390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.570400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.570608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.570618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.570832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.570844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.571075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.571085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.571294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.571304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.571507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.571516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.571792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.571802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.571951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.571961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 
00:31:23.576 [2024-06-07 21:48:23.572102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.572112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.572335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.572347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.572547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.572557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.572802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.572812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.573048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.573058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.576 qpair failed and we were unable to recover it. 00:31:23.576 [2024-06-07 21:48:23.573267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.576 [2024-06-07 21:48:23.573277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.573502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.573511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.573776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.573786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.573992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.574001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.574270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.574280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 
00:31:23.577 [2024-06-07 21:48:23.574485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.574495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.574813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.574823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.575033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.575043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.575269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.575278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.575488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.575497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.575705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.575714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.575867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.575877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.576088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.576098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.576256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.576266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.576544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.576553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 
00:31:23.577 [2024-06-07 21:48:23.576711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.576720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.576928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.576938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.577146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.577157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.577378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.577387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.577623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.577633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.577789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.577799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.578011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.578021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.578294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.578304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.578543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.578553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.578767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.578777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 
00:31:23.577 [2024-06-07 21:48:23.578935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.578945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.579223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.579233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.579379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.579390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.579690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.579700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.579919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.579930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.580223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.580233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.580475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.580485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.580639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.580649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.580897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.580906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 00:31:23.577 [2024-06-07 21:48:23.581178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.577 [2024-06-07 21:48:23.581187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.577 qpair failed and we were unable to recover it. 
00:31:23.578 [2024-06-07 21:48:23.581418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.581428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.581722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.581732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.581957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.581966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.582114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.582123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.582336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.582346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.582567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.582576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.582784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.582793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.583036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.583046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.583339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.583348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.583577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.583587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.583800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.583809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.584031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.584040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.584253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.584262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.584462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.584472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.584749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.584758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.584978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.584988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.585275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.585285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.585572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.585582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.585794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.585803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.586093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.586103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.586406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.586416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.586635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.586644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.586885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.586894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.587102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.587112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.587217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.587226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.587449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.587459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.587674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.587683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.587900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.587909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.588172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.588182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.588322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.588331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.588536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.588545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.588835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.588844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.589089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.589098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.589321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.589331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.589494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.589505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.589734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.578 [2024-06-07 21:48:23.589744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.578 qpair failed and we were unable to recover it.
00:31:23.578 [2024-06-07 21:48:23.590033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.590043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.590347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.590357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.590637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.590646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.590861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.590870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.591113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.591122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.591417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.591426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.591640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.591649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.591853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.591863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.592100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.592110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.592351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.592360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.592579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.592588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.592736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.592745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.592895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.592905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.593147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.593156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.593321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.593331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.593546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.593556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.593705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.593714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.593878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.593887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.594151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.594161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.594318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.594327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.594505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.594515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.594723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.594734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.594887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.594897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.595167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.595177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.595420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.595429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.595605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.595614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.595820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.595829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.595988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.595997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.596218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.596228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.596462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.596472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.596613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.596623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.596762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.596771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.597019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.597036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.597165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.597174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.597397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.597406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.597695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.579 [2024-06-07 21:48:23.597705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.579 qpair failed and we were unable to recover it.
00:31:23.579 [2024-06-07 21:48:23.597934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.597943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.598163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.598173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.598390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.598400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.598669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.598679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.598826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.598835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.599053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.599063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.599336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.599346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.599491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.599500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.599654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.599663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.599874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.599883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.600093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.600102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.600299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.600308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.600529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.600538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.600699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.600708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.600926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.600936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.601166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.601176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.601336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.601346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.601561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.601570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.601721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.601731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.601941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.601950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.602096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.602106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.602354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.602363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.602707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.602716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.603008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.603017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.603288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.603329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.603606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.603638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.603904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.603934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.604138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.604171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7b4000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.604346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.604356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.604503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.580 [2024-06-07 21:48:23.604512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.580 qpair failed and we were unable to recover it.
00:31:23.580 [2024-06-07 21:48:23.604705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.604714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.604919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.604928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.605121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.605131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.605401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.605411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.605700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.605709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.605936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.605945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.606158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.606168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.606368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.606377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.606592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.606602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.606838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.606847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.607004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.607013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.607221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.607231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.607536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.607547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.607761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.607770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.607923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.607933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.608145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.608155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.608351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.608360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.608655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.608664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.608827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.608836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.609071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.609080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.609277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.609286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.609450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.609460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.609695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.609704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.609965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.609974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.610136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.610146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.610359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.610369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.610578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.610588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.610800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.610810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.611019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.611032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.611301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.611311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.611576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.611585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.611885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.611894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.611987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.611996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.612244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.612253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.612549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.581 [2024-06-07 21:48:23.612558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.581 qpair failed and we were unable to recover it.
00:31:23.581 [2024-06-07 21:48:23.612775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.612784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.612986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.612995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.613122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.613132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.613326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.613335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.613630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.613639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.613800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.613810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:31:23.582 [2024-06-07 21:48:23.614024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.614045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:31:23.582 [2024-06-07 21:48:23.614313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.614325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:31:23.582 [2024-06-07 21:48:23.614593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.614604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.614824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.614834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:31:23.582 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:23.582 [2024-06-07 21:48:23.615133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.615145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.615371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.615380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.615579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.615588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.615908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.615917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.616201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.616211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.616449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.616461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.616674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.616683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.616977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.616986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.617281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.617291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.617433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.617443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.617669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.617679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.617974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.617984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.618183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.618193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.618389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.618398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.618682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.618693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.618997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.619008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.619252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.619263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.619551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.619561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.619760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.619770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.620000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.620010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.620154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.582 [2024-06-07 21:48:23.620164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.582 qpair failed and we were unable to recover it.
00:31:23.582 [2024-06-07 21:48:23.620492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.620502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.620719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.620729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.620876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.620886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.621050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.621060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.621331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.621340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.621550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.621559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.621880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.621889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.622126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.622138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.622283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.622293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.622432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.622442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.622659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.622668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.622906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.622916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.623195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.623205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.623416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.623426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.623637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.623647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.623780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.623790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.624057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.624067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.624261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.624270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.624430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.624439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.624654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.624664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.624820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.624830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.624989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.624998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.625209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.625219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.625376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.625385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.625592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.625604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.625768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.625777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.625994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.626004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.626151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.626161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.626258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.626268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.626484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.626495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.626627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.626636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.626905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.626915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.627185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.627195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.627466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.627476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.627702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.627711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.627861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.627870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.583 qpair failed and we were unable to recover it.
00:31:23.583 [2024-06-07 21:48:23.628077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.583 [2024-06-07 21:48:23.628087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.584 qpair failed and we were unable to recover it.
00:31:23.584 [2024-06-07 21:48:23.628291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.584 [2024-06-07 21:48:23.628300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.584 qpair failed and we were unable to recover it.
00:31:23.584 [2024-06-07 21:48:23.628445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.584 [2024-06-07 21:48:23.628454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.584 qpair failed and we were unable to recover it.
00:31:23.584 [2024-06-07 21:48:23.628604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.584 [2024-06-07 21:48:23.628614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.584 qpair failed and we were unable to recover it.
00:31:23.584 [2024-06-07 21:48:23.628850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.628861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.629009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.629019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.629291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.629301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.629532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.629542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.629682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.629691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.629959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.629968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.630181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.630190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.630427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.630437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.630595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.630605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.630786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.630795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 
00:31:23.584 [2024-06-07 21:48:23.631115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.631125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.631362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.631372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.631510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.631519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.631638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.631648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.631854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.631864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.632004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.632014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.632155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.632165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.632318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.632328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.632467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.632477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.632640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.632650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 
00:31:23.584 [2024-06-07 21:48:23.632828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.632838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.632988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.632998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.633199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.633209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.633415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.633425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.633621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.633632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.633839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.633848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.633996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.634007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.634107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.634118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.584 [2024-06-07 21:48:23.634254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.584 [2024-06-07 21:48:23.634264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.584 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.634482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.634492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 
00:31:23.585 [2024-06-07 21:48:23.634688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.634697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.634840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.634849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.634945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.634954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.635047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.635056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.635154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.635164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.635433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.635445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.635638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.635648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.635858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.635868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.636080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.636089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.636230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.636239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 
00:31:23.585 [2024-06-07 21:48:23.636452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.636462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.636749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.636759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.636891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.636900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.637100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.637110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.637247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.637256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.637471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.637480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.637665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.637674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.637885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.637895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.638117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.638127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.638341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.638351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 
00:31:23.585 [2024-06-07 21:48:23.638558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.638568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.638715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.638727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.638871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.638880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.639009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.639019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.639237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.639247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.639342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.639351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.639646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.639656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.639924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.639934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.640134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.585 [2024-06-07 21:48:23.640144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.585 qpair failed and we were unable to recover it. 00:31:23.585 [2024-06-07 21:48:23.640429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.640439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 
00:31:23.586 [2024-06-07 21:48:23.640705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.640714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.640864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.640873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.641070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.641081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.641220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.641231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.641387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.641397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.641613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.641622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.641743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.641753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.642022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.642036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.642209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.642219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.642395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.642404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 
00:31:23.586 [2024-06-07 21:48:23.642607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.642616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.642776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.642785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.642994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.643003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.643206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.643216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.643413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.643423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.643579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.643589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.643745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.643754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.644075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.644085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.644321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.644330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.644469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.644479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 
00:31:23.586 [2024-06-07 21:48:23.644755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.644764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.644912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.644922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.645071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.645081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.645229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.645238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.645471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.645480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.645628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.645638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.645786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.645795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.646073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.646084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.646302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.646312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.646507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.646516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 
00:31:23.586 [2024-06-07 21:48:23.646735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.646745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.646898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.646909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.647189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.647200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.647339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.647349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.647494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.647503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.586 qpair failed and we were unable to recover it. 00:31:23.586 [2024-06-07 21:48:23.647644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.586 [2024-06-07 21:48:23.647653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-06-07 21:48:23.647798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-06-07 21:48:23.647807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-06-07 21:48:23.647938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-06-07 21:48:23.647948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-06-07 21:48:23.648145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-06-07 21:48:23.648155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 00:31:23.587 [2024-06-07 21:48:23.648492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.587 [2024-06-07 21:48:23.648502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.587 qpair failed and we were unable to recover it. 
00:31:23.587 [2024-06-07 21:48:23.648817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-06-07 21:48:23.648827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.587 qpair failed and we were unable to recover it.
00:31:23.587 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:23.587 [... the same connect()/qpair-failed triplet repeats verbatim from 21:48:23.648969 through 21:48:23.650268, interleaved with the script-trace lines below ...]
00:31:23.587 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:23.587 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:23.587 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
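[editor's note] The two script-trace lines above show the harness registering its cleanup handler and provisioning the test bdev: the trap line guarantees that shared-memory state is collected and the target is torn down whether the test is interrupted, killed, or exits normally, and rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the SPDK target for a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. A minimal sketch of the same cleanup idiom; the function body here is an illustrative placeholder, not the suite's real process_shm/nvmftestfini helpers:

  #!/usr/bin/env bash
  # placeholder teardown; the real suite dumps shared memory and stops the target here
  cleanup() {
    echo "collecting state and tearing down the test target"
  }
  # run cleanup on interrupt, termination, or any exit path
  trap 'cleanup' SIGINT SIGTERM EXIT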
00:31:23.587 [2024-06-07 21:48:23.650408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:23.587 [2024-06-07 21:48:23.650417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420
00:31:23.587 qpair failed and we were unable to recover it.
00:31:23.589 [... the same connect()/qpair-failed triplet repeats verbatim from 21:48:23.650560 through 21:48:23.661983 ...]
00:31:23.589 [2024-06-07 21:48:23.662184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.662195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.662403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.662416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.662616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.662627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.662838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.662849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.663004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.663014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.663288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.663301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.663504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.663515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.663783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.663794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.663953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.663964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.664102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.664112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 
00:31:23.589 [2024-06-07 21:48:23.664244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.664254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.664532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.664543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.664672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.664683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.664880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.664891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.665120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.665130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.665358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.665368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.665586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.665596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.665720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.665729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.665960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.665970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 00:31:23.589 [2024-06-07 21:48:23.666179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.589 [2024-06-07 21:48:23.666189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7bc000b90 with addr=10.0.0.2, port=4420 00:31:23.589 qpair failed and we were unable to recover it. 
00:31:23.589 [... errno = 111 retry triplets continue from 21:48:23.666 through 21:48:23.667 ...]
00:31:23.589 Malloc0
00:31:23.589 [... errno = 111 retry triplets continue ...]
00:31:23.589 [... errno = 111 retry triplets continue ...]
00:31:23.589 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:23.589 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:23.590 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:23.590 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:23.590 [... errno = 111 retry triplets continue, interleaved with the script output above ...]
00:31:23.590 [... errno = 111 retry triplets continue from 21:48:23.670 through 21:48:23.674 ...]
00:31:23.590 [... errno = 111 retry triplets continue ...]
00:31:23.591 [2024-06-07 21:48:23.675617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:23.591 [... errno = 111 retry triplets continue ...]
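The *** TCP Transport Init *** notice confirms that the nvmf_create_transport call issued by target_disconnect.sh took effect in the target application. rpc_cmd is the autotest helper that forwards to SPDK's scripts/rpc.py; a standalone equivalent would look like the sketch below (RPC socket assumed to be the default, flags copied verbatim from the log line):

  # Assumes an SPDK target app already running with the default RPC socket.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o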
00:31:23.591 [... errno = 111 retry triplets continue from 21:48:23.676 through 21:48:23.684 ...]
00:31:23.592 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:23.592 [... errno = 111 retry triplets continue ...]
00:31:23.592 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:23.592 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:23.592 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:23.592 [... errno = 111 retry triplets continue, interleaved with the script output above ...]
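nvmf_create_subsystem creates the NVMe-oF subsystem the initiator is trying to reach; -a allows any host NQN to connect and -s sets the serial number. Together with the Malloc0 bdev whose name was printed earlier, the target-side bring-up so far amounts to the following sketch (bdev size and block size are assumed, since the log does not show them):

  # Sketch of the target-side sequence driven by target_disconnect.sh.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # backing ram disk; prints "Malloc0" as seen above
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001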
00:31:23.592 [... errno = 111 retry triplets continue from 21:48:23.686 through 21:48:23.692 ...]
00:31:23.593 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:23.593 [... errno = 111 retry triplets continue ...]
00:31:23.593 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:23.593 [... one more connect retry fails at 21:48:23.692478, same pattern ...]
00:31:23.593 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:23.593 [... one more connect retry fails at 21:48:23.692770, same pattern ...]
00:31:23.593 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:23.593 [... six more connect retries fail, 21:48:23.693081 through 21:48:23.694366, same pattern ...]
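[For reference: rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client, so the @24 call above corresponds to this direct invocation of scripts/rpc.py from the SPDK tree (default RPC socket assumed):

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

This attaches the Malloc0 bdev as a namespace of subsystem nqn.2016-06.io.spdk:cnode1 on the restarted target; the host's connect retries keep failing meanwhile because no TCP listener exists yet.]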
00:31:23.593 [... ten more connect retries fail, 21:48:23.694567 through 21:48:23.696475, same connect() failed, errno = 111 pattern ...]
00:31:23.593 [... ten more connect retries fail, 21:48:23.696604 through 21:48:23.698824, same pattern ...]
00:31:23.594 [... six more connect retries fail, 21:48:23.699037 through 21:48:23.700135, same pattern ...]
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:23.594 [... two more connect retries fail at 21:48:23.700273 and 21:48:23.700497, same pattern ...]
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:23.594 [... two more connect retries fail at 21:48:23.700836 and 21:48:23.701063, same pattern ...]
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:23.594 [... six more connect retries fail, 21:48:23.701272 through 21:48:23.702394, same pattern ...]
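[For reference: the @25 call maps to SPDK's nvmf_subsystem_add_listener RPC; a minimal direct equivalent (default RPC socket assumed) would be:

    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once this completes the target begins accepting TCP connections on 10.0.0.2:4420, which is why the ECONNREFUSED retries stop a few milliseconds below at the *** NVMe/TCP Target Listening *** notice.]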
00:31:23.594 [... seven more connect retries fail, 21:48:23.702616 through 21:48:23.703845, same pattern ...]
00:31:23.594 [2024-06-07 21:48:23.703865] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:23.594 [2024-06-07 21:48:23.706335] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:23.594 [2024-06-07 21:48:23.706439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:23.594 [2024-06-07 21:48:23.706459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:23.594 [2024-06-07 21:48:23.706466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:23.594 [2024-06-07 21:48:23.706472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:23.594 [2024-06-07 21:48:23.706493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:23.594 qpair failed and we were unable to recover it.
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:23.594 [... a second Unknown controller ID 0x1 / CONNECT failed (sct 1, sc 130) sequence follows, 21:48:23.716323 through 21:48:23.716463, interleaved with an @588 trace line; same content as the block above ...]
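[For reference: once the listener is up the TCP connect() succeeds, but the NVMe-oF Fabrics CONNECT for I/O qpair 2 is rejected. sct 1 is the command-specific status type and sc 130 is 0x82, which for CONNECT means Connect Invalid Parameters; that matches the target-side "Unknown controller ID 0x1", since the host is reattaching an I/O queue to a controller the restarted target no longer knows about. For orientation only, a fresh host-side connection to the same endpoint, expressed with nvme-cli, would look like:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

(A fresh connect creates a new controller, so it would not hit the stale-controller-ID path this test exercises.)]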
00:31:23.594 21:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1638661
00:31:23.595 [... three more Unknown controller ID 0x1 / CONNECT failed (sct 1, sc 130) / CQ transport error -6 sequences at 21:48:23.726270, 21:48:23.736209 and 21:48:23.746275, each ending with "qpair failed and we were unable to recover it." ...]
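[For reference: the @50 wait blocks on PID 1638661, a job the script started in the background earlier (outside this excerpt). A minimal sketch of that bash pattern, with a hypothetical job name:

    disconnect_workload &   # hypothetical background step under test
    pid=$!
    # ... restart the target, re-add namespace and listeners via RPC ...
    wait "$pid"             # @50: block until the background step exits and collect its status

The CONNECT-failure messages keep streaming while wait blocks because the NVMe host is still retrying the I/O qpair connects in the background.]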
00:31:23.595 [2024-06-07 21:48:23.756273] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.595 [2024-06-07 21:48:23.756359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.595 [2024-06-07 21:48:23.756374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.595 [2024-06-07 21:48:23.756380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.595 [2024-06-07 21:48:23.756385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.595 [2024-06-07 21:48:23.756399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.595 qpair failed and we were unable to recover it. 00:31:23.595 [2024-06-07 21:48:23.766306] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.595 [2024-06-07 21:48:23.766390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.595 [2024-06-07 21:48:23.766406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.595 [2024-06-07 21:48:23.766411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.595 [2024-06-07 21:48:23.766417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.595 [2024-06-07 21:48:23.766430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.595 qpair failed and we were unable to recover it. 00:31:23.595 [2024-06-07 21:48:23.776246] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.595 [2024-06-07 21:48:23.776335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.595 [2024-06-07 21:48:23.776350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.595 [2024-06-07 21:48:23.776356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.595 [2024-06-07 21:48:23.776361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.595 [2024-06-07 21:48:23.776375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.595 qpair failed and we were unable to recover it. 
00:31:23.595 [2024-06-07 21:48:23.786352] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.595 [2024-06-07 21:48:23.786431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.595 [2024-06-07 21:48:23.786446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.595 [2024-06-07 21:48:23.786453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.595 [2024-06-07 21:48:23.786458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.595 [2024-06-07 21:48:23.786472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.595 qpair failed and we were unable to recover it. 00:31:23.595 [2024-06-07 21:48:23.796376] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.595 [2024-06-07 21:48:23.796474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.595 [2024-06-07 21:48:23.796488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.595 [2024-06-07 21:48:23.796494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.595 [2024-06-07 21:48:23.796500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.595 [2024-06-07 21:48:23.796514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.595 qpair failed and we were unable to recover it. 00:31:23.595 [2024-06-07 21:48:23.806419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.595 [2024-06-07 21:48:23.806499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.595 [2024-06-07 21:48:23.806514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.595 [2024-06-07 21:48:23.806520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.595 [2024-06-07 21:48:23.806526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.595 [2024-06-07 21:48:23.806539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.595 qpair failed and we were unable to recover it. 
00:31:23.595 [2024-06-07 21:48:23.816435] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.595 [2024-06-07 21:48:23.816518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.595 [2024-06-07 21:48:23.816533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.595 [2024-06-07 21:48:23.816539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.595 [2024-06-07 21:48:23.816545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.595 [2024-06-07 21:48:23.816558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.595 qpair failed and we were unable to recover it. 00:31:23.856 [2024-06-07 21:48:23.826508] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.856 [2024-06-07 21:48:23.826628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.856 [2024-06-07 21:48:23.826644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.856 [2024-06-07 21:48:23.826656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.856 [2024-06-07 21:48:23.826662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.856 [2024-06-07 21:48:23.826676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.856 qpair failed and we were unable to recover it. 00:31:23.856 [2024-06-07 21:48:23.836546] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.856 [2024-06-07 21:48:23.836633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.856 [2024-06-07 21:48:23.836648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.856 [2024-06-07 21:48:23.836655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.856 [2024-06-07 21:48:23.836660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.856 [2024-06-07 21:48:23.836674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.856 qpair failed and we were unable to recover it. 
00:31:23.856 [2024-06-07 21:48:23.846602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.856 [2024-06-07 21:48:23.846687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.856 [2024-06-07 21:48:23.846702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.856 [2024-06-07 21:48:23.846708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.856 [2024-06-07 21:48:23.846713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.856 [2024-06-07 21:48:23.846727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.856 qpair failed and we were unable to recover it. 00:31:23.856 [2024-06-07 21:48:23.856550] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.856 [2024-06-07 21:48:23.856680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.856 [2024-06-07 21:48:23.856696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.856 [2024-06-07 21:48:23.856703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.856 [2024-06-07 21:48:23.856708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.856 [2024-06-07 21:48:23.856723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.856 qpair failed and we were unable to recover it. 00:31:23.856 [2024-06-07 21:48:23.866594] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.856 [2024-06-07 21:48:23.866677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.856 [2024-06-07 21:48:23.866693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.856 [2024-06-07 21:48:23.866699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.856 [2024-06-07 21:48:23.866705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.856 [2024-06-07 21:48:23.866719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.856 qpair failed and we were unable to recover it. 
00:31:23.856 [2024-06-07 21:48:23.876607] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.856 [2024-06-07 21:48:23.876687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.856 [2024-06-07 21:48:23.876701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.856 [2024-06-07 21:48:23.876708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.876713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.876727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 00:31:23.857 [2024-06-07 21:48:23.886656] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.886738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.886752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.886758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.886764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.886777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 00:31:23.857 [2024-06-07 21:48:23.896667] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.896752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.896767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.896773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.896778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.896792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 
00:31:23.857 [2024-06-07 21:48:23.906710] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.906790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.906805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.906811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.906817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.906831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 00:31:23.857 [2024-06-07 21:48:23.916777] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.916855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.916873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.916879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.916884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.916898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 00:31:23.857 [2024-06-07 21:48:23.926825] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.926906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.926920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.926926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.926931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.926945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 
00:31:23.857 [2024-06-07 21:48:23.936899] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.936997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.937011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.937017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.937022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.937041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 00:31:23.857 [2024-06-07 21:48:23.946944] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.947022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.947042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.947048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.947053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.947067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 00:31:23.857 [2024-06-07 21:48:23.957003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.957089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.957105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.957111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.957116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.957133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 
00:31:23.857 [2024-06-07 21:48:23.966986] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.967064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.967080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.967086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.967091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.967104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 00:31:23.857 [2024-06-07 21:48:23.976919] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.976998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.977013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.977020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.977031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.977045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 00:31:23.857 [2024-06-07 21:48:23.986961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.987048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.857 [2024-06-07 21:48:23.987063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.857 [2024-06-07 21:48:23.987069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.857 [2024-06-07 21:48:23.987074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.857 [2024-06-07 21:48:23.987088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.857 qpair failed and we were unable to recover it. 
00:31:23.857 [2024-06-07 21:48:23.997011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.857 [2024-06-07 21:48:23.997097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:23.997112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:23.997118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:23.997123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:23.997137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 00:31:23.858 [2024-06-07 21:48:24.007059] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.858 [2024-06-07 21:48:24.007139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:24.007157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:24.007164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:24.007169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:24.007183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 00:31:23.858 [2024-06-07 21:48:24.017044] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.858 [2024-06-07 21:48:24.017130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:24.017145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:24.017151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:24.017156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:24.017170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 
00:31:23.858 [2024-06-07 21:48:24.027070] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.858 [2024-06-07 21:48:24.027153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:24.027168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:24.027174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:24.027179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:24.027193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 00:31:23.858 [2024-06-07 21:48:24.037088] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.858 [2024-06-07 21:48:24.037184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:24.037199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:24.037205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:24.037211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:24.037225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 00:31:23.858 [2024-06-07 21:48:24.047118] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.858 [2024-06-07 21:48:24.047196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:24.047211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:24.047217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:24.047223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:24.047239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 
00:31:23.858 [2024-06-07 21:48:24.057139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.858 [2024-06-07 21:48:24.057221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:24.057236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:24.057242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:24.057247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:24.057261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 00:31:23.858 [2024-06-07 21:48:24.067122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.858 [2024-06-07 21:48:24.067203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:24.067219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:24.067225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:24.067230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:24.067243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 00:31:23.858 [2024-06-07 21:48:24.077201] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.858 [2024-06-07 21:48:24.077317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.858 [2024-06-07 21:48:24.077332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.858 [2024-06-07 21:48:24.077339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.858 [2024-06-07 21:48:24.077344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:23.858 [2024-06-07 21:48:24.077358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.858 qpair failed and we were unable to recover it. 
00:31:23.858 [2024-06-07 21:48:24.087291] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:23.858 [2024-06-07 21:48:24.087382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:23.858 [2024-06-07 21:48:24.087397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:23.858 [2024-06-07 21:48:24.087403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:23.858 [2024-06-07 21:48:24.087409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:23.858 [2024-06-07 21:48:24.087423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:23.858 qpair failed and we were unable to recover it.
00:31:23.858 [2024-06-07 21:48:24.097258] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:23.858 [2024-06-07 21:48:24.097349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:23.858 [2024-06-07 21:48:24.097366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:23.858 [2024-06-07 21:48:24.097372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:23.858 [2024-06-07 21:48:24.097378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:23.858 [2024-06-07 21:48:24.097391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:23.858 qpair failed and we were unable to recover it.
00:31:23.858 [2024-06-07 21:48:24.107293] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:23.858 [2024-06-07 21:48:24.107382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:23.858 [2024-06-07 21:48:24.107396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:23.858 [2024-06-07 21:48:24.107403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:23.858 [2024-06-07 21:48:24.107408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:23.858 [2024-06-07 21:48:24.107422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:23.858 qpair failed and we were unable to recover it.
00:31:23.858 [2024-06-07 21:48:24.117269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:23.858 [2024-06-07 21:48:24.117347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:23.858 [2024-06-07 21:48:24.117362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:23.858 [2024-06-07 21:48:24.117368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:23.858 [2024-06-07 21:48:24.117373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:23.858 [2024-06-07 21:48:24.117387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:23.858 qpair failed and we were unable to recover it.
00:31:24.118 [2024-06-07 21:48:24.127358] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.118 [2024-06-07 21:48:24.127435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.118 [2024-06-07 21:48:24.127450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.118 [2024-06-07 21:48:24.127456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.118 [2024-06-07 21:48:24.127461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.118 [2024-06-07 21:48:24.127474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.118 qpair failed and we were unable to recover it.
00:31:24.118 [2024-06-07 21:48:24.137370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.118 [2024-06-07 21:48:24.137455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.118 [2024-06-07 21:48:24.137470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.118 [2024-06-07 21:48:24.137476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.118 [2024-06-07 21:48:24.137484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.137498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.147394] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.147479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.147494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.147500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.147505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.147519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.157425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.157508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.157523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.157529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.157534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.157548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.167474] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.167548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.167563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.167569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.167574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.167587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.177556] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.177643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.177657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.177663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.177668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.177682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.187583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.187665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.187681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.187688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.187693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.187707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.197642] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.197720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.197735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.197741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.197746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.197759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.207602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.207680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.207695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.207702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.207707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.207720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.217573] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.217655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.217669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.217675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.217680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.217694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.227645] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.227742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.227757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.227766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.227771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.227785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.237707] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.237780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.237795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.237801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.237806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.237819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.247713] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.247792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.247806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.247812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.247817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.247832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.257659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.257764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.257779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.257785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.257790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.257804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.267749] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.119 [2024-06-07 21:48:24.267829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.119 [2024-06-07 21:48:24.267844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.119 [2024-06-07 21:48:24.267850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.119 [2024-06-07 21:48:24.267855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.119 [2024-06-07 21:48:24.267869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.119 qpair failed and we were unable to recover it.
00:31:24.119 [2024-06-07 21:48:24.277779] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.277862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.277876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.277882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.277887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.277901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.287802] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.287879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.287894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.287901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.287906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.287920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.297824] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.297907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.297922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.297928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.297933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.297947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.307889] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.307972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.307987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.307993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.307998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.308012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.317893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.317972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.317987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.317996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.318001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.318015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.327923] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.328015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.328034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.328040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.328045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.328060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.337940] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.338023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.338043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.338049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.338054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.338068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.347989] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.348096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.348111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.348117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.348122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.348136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.358018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.358107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.358122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.358128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.358133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.358147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.368023] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.368108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.368123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.368130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.368135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.368149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.120 [2024-06-07 21:48:24.378071] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.120 [2024-06-07 21:48:24.378181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.120 [2024-06-07 21:48:24.378195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.120 [2024-06-07 21:48:24.378202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.120 [2024-06-07 21:48:24.378207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.120 [2024-06-07 21:48:24.378222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.120 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.388095] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.388182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.388197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.388203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.388209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.388224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.398126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.398204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.398219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.398225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.398231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.398245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.408090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.408189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.408207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.408214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.408219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.408233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.418172] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.418325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.418341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.418347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.418352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.418367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.428232] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.428309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.428324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.428331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.428336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.428350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.438270] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.438349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.438364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.438370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.438375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.438389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.448287] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.448377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.448391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.448398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.448403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.448422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.458269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.458352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.458366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.458373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.458378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.458391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.468326] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.468410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.468425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.468431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.468437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.380 [2024-06-07 21:48:24.468450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.380 qpair failed and we were unable to recover it.
00:31:24.380 [2024-06-07 21:48:24.478363] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.380 [2024-06-07 21:48:24.478444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.380 [2024-06-07 21:48:24.478458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.380 [2024-06-07 21:48:24.478464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.380 [2024-06-07 21:48:24.478469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.478484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.488396] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.488509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.488529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.488536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.488543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.488558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.498388] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.498467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.498485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.498492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.498497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.498511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.508523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.508610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.508625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.508633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.508639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.508652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.518521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.518602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.518617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.518623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.518628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.518642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.528560] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.528644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.528659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.528665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.528670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.528684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.538620] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.538709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.538723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.538729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.538737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.538751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.548662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.548746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.548761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.548767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.548772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.548785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.558596] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.558685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.558699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.558706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.558710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.558724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.568639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.568718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.568733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.568739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.568744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.568758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.578739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.578852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.578873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.578879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.578885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.578899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.588641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.588727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.588743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.588749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.588754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.588768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.598728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.598807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.598822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.598828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.598833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.598848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.608815] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.608943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.381 [2024-06-07 21:48:24.608959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.381 [2024-06-07 21:48:24.608965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.381 [2024-06-07 21:48:24.608970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.381 [2024-06-07 21:48:24.608984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.381 qpair failed and we were unable to recover it.
00:31:24.381 [2024-06-07 21:48:24.618789] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.381 [2024-06-07 21:48:24.618870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.382 [2024-06-07 21:48:24.618885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.382 [2024-06-07 21:48:24.618891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.382 [2024-06-07 21:48:24.618896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.382 [2024-06-07 21:48:24.618910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.382 qpair failed and we were unable to recover it.
00:31:24.382 [2024-06-07 21:48:24.628749] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.382 [2024-06-07 21:48:24.628828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.382 [2024-06-07 21:48:24.628843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.382 [2024-06-07 21:48:24.628852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.382 [2024-06-07 21:48:24.628857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.382 [2024-06-07 21:48:24.628871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.382 qpair failed and we were unable to recover it.
00:31:24.382 [2024-06-07 21:48:24.638835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.382 [2024-06-07 21:48:24.638913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.382 [2024-06-07 21:48:24.638927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.382 [2024-06-07 21:48:24.638934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.382 [2024-06-07 21:48:24.638939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.382 [2024-06-07 21:48:24.638953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.382 qpair failed and we were unable to recover it.
00:31:24.641 [2024-06-07 21:48:24.648909] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.641 [2024-06-07 21:48:24.648996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.641 [2024-06-07 21:48:24.649010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.641 [2024-06-07 21:48:24.649016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.641 [2024-06-07 21:48:24.649022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.641 [2024-06-07 21:48:24.649043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.641 qpair failed and we were unable to recover it.
00:31:24.641 [2024-06-07 21:48:24.658897] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.641 [2024-06-07 21:48:24.658975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.641 [2024-06-07 21:48:24.658989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.641 [2024-06-07 21:48:24.658996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.641 [2024-06-07 21:48:24.659001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.641 [2024-06-07 21:48:24.659015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.641 qpair failed and we were unable to recover it.
00:31:24.641 [2024-06-07 21:48:24.668922] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.641 [2024-06-07 21:48:24.669003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.669018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.669029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.669035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.669049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.678970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.679054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.679069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.679075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.679080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.679095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.688999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.689095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.689110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.689116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.689121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.689136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.698963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.699051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.699066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.699072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.699077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.699092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.709053] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.709141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.709156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.709162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.709168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.709182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.719080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.719185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.719199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.719209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.719214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.719228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.729073] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.729149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.729165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.729170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.729176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.729189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.739159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.739270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.739284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.739291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.739296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.739310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.749239] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.749363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.749377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.749383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.749388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.749402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.759203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.759284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.759300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.759307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.759312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.759326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.769213] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.769296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.769311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.769317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.769322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.769335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.779244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.779335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.779350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.779356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.779361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.779375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.789211] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.789291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.789305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.789311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.789317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.642 [2024-06-07 21:48:24.789330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.642 qpair failed and we were unable to recover it.
00:31:24.642 [2024-06-07 21:48:24.799252] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.642 [2024-06-07 21:48:24.799365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.642 [2024-06-07 21:48:24.799384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.642 [2024-06-07 21:48:24.799390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.642 [2024-06-07 21:48:24.799396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.799410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.809314] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.809400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.809418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.809424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.809429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.809443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.819366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.819449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.819464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.819470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.819475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.819489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.829384] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.829465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.829480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.829486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.829491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.829504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.839353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.839431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.839446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.839452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.839457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.839470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.849394] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.849515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.849532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.849538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.849543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.849560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.859536] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.859657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.859673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.859679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.859685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.859699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.869510] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.869591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.869606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.869612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.869617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.869631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.879549] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.879624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.879638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.879644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.879649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.879663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.889592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.889671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.889686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.889692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.889697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.889711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.643 [2024-06-07 21:48:24.899535] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.643 [2024-06-07 21:48:24.899615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.643 [2024-06-07 21:48:24.899633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.643 [2024-06-07 21:48:24.899639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.643 [2024-06-07 21:48:24.899644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.643 [2024-06-07 21:48:24.899659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.643 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.909640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.909720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.909736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.909742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.909747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.909761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.919694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.919771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.919785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.919792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.919797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.919811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.929683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.929770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.929785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.929791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.929796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.929810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.939664] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.939748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.939763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.939769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.939778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.939792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.949780] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.949907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.949923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.949930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.949936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.949950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.959831] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.959918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.959934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.959940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.959946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.959960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.969751] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.969828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.969843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.969849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.969854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.969868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.979863] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.979944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.979958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.979964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.979969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.979984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.989889] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.989997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:24.990013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:24.990019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:24.990028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:24.990043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:24.999875] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.903 [2024-06-07 21:48:24.999987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.903 [2024-06-07 21:48:25.000008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.903 [2024-06-07 21:48:25.000015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.903 [2024-06-07 21:48:25.000020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.903 [2024-06-07 21:48:25.000040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.903 qpair failed and we were unable to recover it.
00:31:24.903 [2024-06-07 21:48:25.009982] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.010071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.010086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.010092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.010097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.010111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.019966] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.020050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.020065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.020072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.020076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.020091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.029997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.030084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.030099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.030105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.030112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.030126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.040022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.040106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.040121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.040127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.040133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.040147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.050069] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.050152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.050167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.050173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.050178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.050192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.060108] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.060215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.060229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.060236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.060241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.060255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.070130] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.070287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.070303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.070309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.070315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.070329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.080143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.080227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.080242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.080248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.080253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.080267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.090104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.090181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.090196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.090202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.090208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.090222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.100271] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.100396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.100412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.100418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.100424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.100438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.110170] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.110253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.110268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.110275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.110280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.110293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.120235] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.120314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.120329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.120338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.120344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.120357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.130297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.130377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.130393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.130399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.904 [2024-06-07 21:48:25.130404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.904 [2024-06-07 21:48:25.130418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.904 qpair failed and we were unable to recover it.
00:31:24.904 [2024-06-07 21:48:25.140303] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.904 [2024-06-07 21:48:25.140385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.904 [2024-06-07 21:48:25.140399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.904 [2024-06-07 21:48:25.140405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.905 [2024-06-07 21:48:25.140411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.905 [2024-06-07 21:48:25.140425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.905 qpair failed and we were unable to recover it.
00:31:24.905 [2024-06-07 21:48:25.150342] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.905 [2024-06-07 21:48:25.150492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.905 [2024-06-07 21:48:25.150508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.905 [2024-06-07 21:48:25.150514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.905 [2024-06-07 21:48:25.150519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.905 [2024-06-07 21:48:25.150534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.905 qpair failed and we were unable to recover it.
00:31:24.905 [2024-06-07 21:48:25.160371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:24.905 [2024-06-07 21:48:25.160448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:24.905 [2024-06-07 21:48:25.160463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:24.905 [2024-06-07 21:48:25.160470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:24.905 [2024-06-07 21:48:25.160475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:24.905 [2024-06-07 21:48:25.160488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:24.905 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.170486] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.170565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.170580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.170586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.170591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.170606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.180414] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.180496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.180511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.180517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.180522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.180536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.190457] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.190609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.190625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.190631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.190636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.190651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.200487] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.200569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.200583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.200590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.200595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.200608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.210533] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.210630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.210649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.210655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.210660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.210674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.220547] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.220627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.220642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.220648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.220653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.220667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.230573] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.230650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.230664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.230670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.230675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.230689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.240623] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.240701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.240716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.240722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.240727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.240741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.250641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.250757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.250772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.250778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.250784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.250801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.260672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.260753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.260768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.260774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.260779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.260793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.270744] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.270838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.270852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.270859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.270864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.165 [2024-06-07 21:48:25.270877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.165 qpair failed and we were unable to recover it.
00:31:25.165 [2024-06-07 21:48:25.280735] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.165 [2024-06-07 21:48:25.280849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.165 [2024-06-07 21:48:25.280865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.165 [2024-06-07 21:48:25.280871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.165 [2024-06-07 21:48:25.280876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.280891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.290834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.290911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.290926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.290932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.290937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.290951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.300796] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.300903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.300920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.300926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.300932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.300946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.310856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.310940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.310955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.310961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.310966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.310979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.320869] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.321017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.321037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.321044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.321049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.321063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.330912] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.331002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.331016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.331022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.331033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.331047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.340922] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.341008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.341022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.341033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.341042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.341057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.350939] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.351051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.351065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.351071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.351077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.351092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.360945] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.361034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.361048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.361055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.361060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.361073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.371005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.371094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.371108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.371115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.371120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.371134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.381019] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.381106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.381121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.381127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.381132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.381146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.391067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.391162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.391177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.391183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.391188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.391201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.401096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.401179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.401193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.401199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.401204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.401218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.411119] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.166 [2024-06-07 21:48:25.411197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.166 [2024-06-07 21:48:25.411212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.166 [2024-06-07 21:48:25.411218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.166 [2024-06-07 21:48:25.411223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.166 [2024-06-07 21:48:25.411236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.166 qpair failed and we were unable to recover it.
00:31:25.166 [2024-06-07 21:48:25.421100] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.167 [2024-06-07 21:48:25.421181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.167 [2024-06-07 21:48:25.421196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.167 [2024-06-07 21:48:25.421202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.167 [2024-06-07 21:48:25.421207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.167 [2024-06-07 21:48:25.421221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.167 qpair failed and we were unable to recover it.
00:31:25.167 [2024-06-07 21:48:25.431171] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.167 [2024-06-07 21:48:25.431250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.167 [2024-06-07 21:48:25.431265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.167 [2024-06-07 21:48:25.431271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.167 [2024-06-07 21:48:25.431279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.167 [2024-06-07 21:48:25.431293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.167 qpair failed and we were unable to recover it.
00:31:25.426 [2024-06-07 21:48:25.441154] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.426 [2024-06-07 21:48:25.441259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.426 [2024-06-07 21:48:25.441273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.426 [2024-06-07 21:48:25.441279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.426 [2024-06-07 21:48:25.441285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.441299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.451298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.451414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.451430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.451436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.451441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.451456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.461273] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.461376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.461391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.461397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.461402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.461416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.471269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.471348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.471362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.471368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.471373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.471387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.481330] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.481412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.481426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.481432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.481437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.481451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.491408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.491497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.491511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.491517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.491522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.491536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.501356] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.501436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.501450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.501457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.501462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.501475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.511440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.511515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.511530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.511536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.511541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.511554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.521438] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.521514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.521529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.521538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.521543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.521556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.531472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.531548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.531563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.531569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.531575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.531588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.541508] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.541591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.541605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.541612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.541617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.541630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.551602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.551709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.551723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.551730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.551736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.551750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.561491] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.561582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.561596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.561602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.561608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.561622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.571629] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.427 [2024-06-07 21:48:25.571716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.427 [2024-06-07 21:48:25.571730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.427 [2024-06-07 21:48:25.571737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.427 [2024-06-07 21:48:25.571742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.427 [2024-06-07 21:48:25.571755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.427 qpair failed and we were unable to recover it.
00:31:25.427 [2024-06-07 21:48:25.581639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.581724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.581738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.581745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.581749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.581763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.591668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.591750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.591764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.591770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.591775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.591789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.601693] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.601788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.601803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.601809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.601814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.601828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.611740] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.611823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.611841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.611847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.611852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.611866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.621760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.621887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.621902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.621908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.621913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.621928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.631792] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.631873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.631888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.631894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.631899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.631913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.641844] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.641926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.641940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.641947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.641952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.641966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.651892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.651988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.652003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.652009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.652015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.652037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.661847] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.661929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.661944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.661950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.661955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.661969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.671898] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.671979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.671993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.671999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.672004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.672018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.681953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.682053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.682067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.682073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.682079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.682092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.428 [2024-06-07 21:48:25.691978] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.428 [2024-06-07 21:48:25.692056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.428 [2024-06-07 21:48:25.692071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.428 [2024-06-07 21:48:25.692078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.428 [2024-06-07 21:48:25.692083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.428 [2024-06-07 21:48:25.692096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.428 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.701917] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.701995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.702014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.702020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.702031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.702045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.712055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.712135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.712150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.712157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.712163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.712177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.722057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.722146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.722160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.722166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.722172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.722186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.732078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.732173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.732188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.732194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.732199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.732213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.742100] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.742182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.742196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.742202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.742207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.742224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.752150] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.752240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.752255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.752261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.752266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.752280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.762196] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.762278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.762294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.762301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.762306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.762320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.772217] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.772297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.772312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.772318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.772323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.772337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.782241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.782346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.782360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.782367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.782372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.782387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.792198] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.792286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.792301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.792307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.792312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.792326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.802297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.802421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.802436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.802443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.802448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.802462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.812314] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.812391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.812406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.812413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.688 [2024-06-07 21:48:25.812418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.688 [2024-06-07 21:48:25.812433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.688 qpair failed and we were unable to recover it.
00:31:25.688 [2024-06-07 21:48:25.822357] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.688 [2024-06-07 21:48:25.822436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.688 [2024-06-07 21:48:25.822451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.688 [2024-06-07 21:48:25.822457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.822463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.822476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.832412] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.832494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.832509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.832515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.832523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.832537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.842419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.842496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.842511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.842517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.842523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.842536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.852449] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.852554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.852568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.852575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.852581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.852595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.862472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.862576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.862591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.862597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.862603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.862616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.872513] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.872591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.872605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.872612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.872617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.872630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.882543] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.882659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.882675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.882682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.882687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.882701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.892515] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.892631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.892647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.892653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.892658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.892672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.902586] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.902702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.902718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.902724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.902730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.902744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.912622] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.912700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.912716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.912722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.912727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.912741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.922596] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.922718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.922735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.922744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.922750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.922763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.932651] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.932723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.932738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.932744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.932749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.932763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.942698] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.942776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.942790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.942797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.942802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.942816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.689 [2024-06-07 21:48:25.952850] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.689 [2024-06-07 21:48:25.952938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.689 [2024-06-07 21:48:25.952952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.689 [2024-06-07 21:48:25.952959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.689 [2024-06-07 21:48:25.952964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.689 [2024-06-07 21:48:25.952977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.689 qpair failed and we were unable to recover it.
00:31:25.949 [2024-06-07 21:48:25.962823] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.949 [2024-06-07 21:48:25.962901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.949 [2024-06-07 21:48:25.962916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.949 [2024-06-07 21:48:25.962922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.949 [2024-06-07 21:48:25.962927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.949 [2024-06-07 21:48:25.962941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.949 qpair failed and we were unable to recover it.
00:31:25.949 [2024-06-07 21:48:25.972882] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.949 [2024-06-07 21:48:25.972964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.949 [2024-06-07 21:48:25.972978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.949 [2024-06-07 21:48:25.972984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.949 [2024-06-07 21:48:25.972990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.949 [2024-06-07 21:48:25.973004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.949 qpair failed and we were unable to recover it.
00:31:25.949 [2024-06-07 21:48:25.982872] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.949 [2024-06-07 21:48:25.982956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.949 [2024-06-07 21:48:25.982970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.949 [2024-06-07 21:48:25.982976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.949 [2024-06-07 21:48:25.982981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.949 [2024-06-07 21:48:25.982995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.949 qpair failed and we were unable to recover it.
00:31:25.949 [2024-06-07 21:48:25.992894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.949 [2024-06-07 21:48:25.992976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.949 [2024-06-07 21:48:25.992991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.949 [2024-06-07 21:48:25.992997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.949 [2024-06-07 21:48:25.993002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.949 [2024-06-07 21:48:25.993015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.949 qpair failed and we were unable to recover it.
00:31:25.949 [2024-06-07 21:48:26.002921] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.949 [2024-06-07 21:48:26.003014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.949 [2024-06-07 21:48:26.003035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.949 [2024-06-07 21:48:26.003042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.949 [2024-06-07 21:48:26.003048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.949 [2024-06-07 21:48:26.003062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.949 qpair failed and we were unable to recover it.
00:31:25.949 [2024-06-07 21:48:26.012893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.949 [2024-06-07 21:48:26.013015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.949 [2024-06-07 21:48:26.013041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.949 [2024-06-07 21:48:26.013048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.949 [2024-06-07 21:48:26.013053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.949 [2024-06-07 21:48:26.013068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.949 qpair failed and we were unable to recover it.
00:31:25.949 [2024-06-07 21:48:26.022933] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.949 [2024-06-07 21:48:26.023015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.949 [2024-06-07 21:48:26.023035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.949 [2024-06-07 21:48:26.023042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.949 [2024-06-07 21:48:26.023047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.949 [2024-06-07 21:48:26.023061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.949 qpair failed and we were unable to recover it.
00:31:25.949 [2024-06-07 21:48:26.032891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.949 [2024-06-07 21:48:26.032986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.949 [2024-06-07 21:48:26.033001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.949 [2024-06-07 21:48:26.033008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.033013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.033034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.043033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.043112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.043126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.043132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.043138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.043152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.052992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.053122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.053138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.053144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.053150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.053164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.063090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.063170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.063185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.063192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.063197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.063211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.073083] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.073167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.073182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.073188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.073193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.073208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.083067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.083152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.083167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.083173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.083179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.083193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.093176] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.093269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.093283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.093291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.093296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.093310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.103217] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.103303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.103320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.103327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.103332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.103345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.113229] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.113312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.113327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.113333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.113338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.113352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.123286] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.123404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.123420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.123427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.123432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.123446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.133216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.133344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.133359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.133366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.133371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.133385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.143327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.143435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.143449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.143456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.143462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.143479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.153347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.153436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.153451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.153457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.153462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.153476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.163362] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.950 [2024-06-07 21:48:26.163439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.950 [2024-06-07 21:48:26.163455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.950 [2024-06-07 21:48:26.163461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.950 [2024-06-07 21:48:26.163466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.950 [2024-06-07 21:48:26.163480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.950 qpair failed and we were unable to recover it.
00:31:25.950 [2024-06-07 21:48:26.173349] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.951 [2024-06-07 21:48:26.173472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.951 [2024-06-07 21:48:26.173488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.951 [2024-06-07 21:48:26.173495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.951 [2024-06-07 21:48:26.173500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.951 [2024-06-07 21:48:26.173514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.951 qpair failed and we were unable to recover it.
00:31:25.951 [2024-06-07 21:48:26.183359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.951 [2024-06-07 21:48:26.183441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.951 [2024-06-07 21:48:26.183456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.951 [2024-06-07 21:48:26.183462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.951 [2024-06-07 21:48:26.183467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.951 [2024-06-07 21:48:26.183481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.951 qpair failed and we were unable to recover it.
00:31:25.951 [2024-06-07 21:48:26.193500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.951 [2024-06-07 21:48:26.193590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.951 [2024-06-07 21:48:26.193608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.951 [2024-06-07 21:48:26.193614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.951 [2024-06-07 21:48:26.193619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.951 [2024-06-07 21:48:26.193632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.951 qpair failed and we were unable to recover it.
00:31:25.951 [2024-06-07 21:48:26.203431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.951 [2024-06-07 21:48:26.203509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.951 [2024-06-07 21:48:26.203523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.951 [2024-06-07 21:48:26.203530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.951 [2024-06-07 21:48:26.203535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.951 [2024-06-07 21:48:26.203548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.951 qpair failed and we were unable to recover it.
00:31:25.951 [2024-06-07 21:48:26.213474] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:25.951 [2024-06-07 21:48:26.213554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:25.951 [2024-06-07 21:48:26.213568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:25.951 [2024-06-07 21:48:26.213575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:25.951 [2024-06-07 21:48:26.213580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:25.951 [2024-06-07 21:48:26.213593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:25.951 qpair failed and we were unable to recover it.
00:31:26.211 [2024-06-07 21:48:26.223544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.211 [2024-06-07 21:48:26.223629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.211 [2024-06-07 21:48:26.223643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.211 [2024-06-07 21:48:26.223651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.211 [2024-06-07 21:48:26.223657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.211 [2024-06-07 21:48:26.223671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.211 qpair failed and we were unable to recover it.
00:31:26.211 [2024-06-07 21:48:26.233651] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.211 [2024-06-07 21:48:26.233772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.211 [2024-06-07 21:48:26.233787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.211 [2024-06-07 21:48:26.233794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.211 [2024-06-07 21:48:26.233802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.211 [2024-06-07 21:48:26.233816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.211 qpair failed and we were unable to recover it.
00:31:26.211 [2024-06-07 21:48:26.243540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.211 [2024-06-07 21:48:26.243620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.211 [2024-06-07 21:48:26.243634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.211 [2024-06-07 21:48:26.243641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.211 [2024-06-07 21:48:26.243646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.211 [2024-06-07 21:48:26.243660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.211 qpair failed and we were unable to recover it.
00:31:26.211 [2024-06-07 21:48:26.253579] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.211 [2024-06-07 21:48:26.253669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.211 [2024-06-07 21:48:26.253684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.211 [2024-06-07 21:48:26.253690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.211 [2024-06-07 21:48:26.253696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.211 [2024-06-07 21:48:26.253710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.211 qpair failed and we were unable to recover it.
00:31:26.211 [2024-06-07 21:48:26.263698] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.211 [2024-06-07 21:48:26.263805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.211 [2024-06-07 21:48:26.263819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.211 [2024-06-07 21:48:26.263826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.211 [2024-06-07 21:48:26.263831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.211 [2024-06-07 21:48:26.263846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.211 qpair failed and we were unable to recover it.
00:31:26.211 [2024-06-07 21:48:26.273721] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.211 [2024-06-07 21:48:26.273800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.211 [2024-06-07 21:48:26.273815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.211 [2024-06-07 21:48:26.273822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.211 [2024-06-07 21:48:26.273827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.211 [2024-06-07 21:48:26.273840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.211 qpair failed and we were unable to recover it.
00:31:26.211 [2024-06-07 21:48:26.283749] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.283833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.283848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.283854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.283859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.283873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.293694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.293771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.293786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.293792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.293797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.293812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.303805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.303888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.303902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.303909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.303914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.303928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.313744] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.313838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.313853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.313859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.313864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.313878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.323819] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.323943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.323959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.323968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.323974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.323987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.333884] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.333971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.333985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.333992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.333997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.334011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.343832] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.343959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.343975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.343981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.343986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.344001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.354009] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.354094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.354109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.354115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.354120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.354134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.363992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.364080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.364095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.364101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.364107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.364121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.373947] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.374032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.374047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.374053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.374059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.374072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.384023] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.384108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.384123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.384129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.384134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.384148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.394039] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.394126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.394140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.394147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.394152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.394165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.404057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.404131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.404146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.404152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.404158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.212 [2024-06-07 21:48:26.404172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.212 qpair failed and we were unable to recover it.
00:31:26.212 [2024-06-07 21:48:26.414136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.212 [2024-06-07 21:48:26.414212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.212 [2024-06-07 21:48:26.414227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.212 [2024-06-07 21:48:26.414236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.212 [2024-06-07 21:48:26.414241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.213 [2024-06-07 21:48:26.414255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.213 qpair failed and we were unable to recover it.
00:31:26.213 [2024-06-07 21:48:26.424194] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.213 [2024-06-07 21:48:26.424301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.213 [2024-06-07 21:48:26.424315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.213 [2024-06-07 21:48:26.424322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.213 [2024-06-07 21:48:26.424327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.213 [2024-06-07 21:48:26.424341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.213 qpair failed and we were unable to recover it.
00:31:26.213 [2024-06-07 21:48:26.434123] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.213 [2024-06-07 21:48:26.434213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.213 [2024-06-07 21:48:26.434227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.213 [2024-06-07 21:48:26.434233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.213 [2024-06-07 21:48:26.434238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.213 [2024-06-07 21:48:26.434251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.213 qpair failed and we were unable to recover it.
00:31:26.213 [2024-06-07 21:48:26.444205] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.213 [2024-06-07 21:48:26.444286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.213 [2024-06-07 21:48:26.444300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.213 [2024-06-07 21:48:26.444306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.213 [2024-06-07 21:48:26.444311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.213 [2024-06-07 21:48:26.444328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.213 qpair failed and we were unable to recover it.
00:31:26.213 [2024-06-07 21:48:26.454235] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.213 [2024-06-07 21:48:26.454318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.213 [2024-06-07 21:48:26.454333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.213 [2024-06-07 21:48:26.454340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.213 [2024-06-07 21:48:26.454345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.213 [2024-06-07 21:48:26.454361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.213 qpair failed and we were unable to recover it.
00:31:26.213 [2024-06-07 21:48:26.464228] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.213 [2024-06-07 21:48:26.464314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.213 [2024-06-07 21:48:26.464329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.213 [2024-06-07 21:48:26.464335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.213 [2024-06-07 21:48:26.464341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.213 [2024-06-07 21:48:26.464356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.213 qpair failed and we were unable to recover it.
00:31:26.213 [2024-06-07 21:48:26.474352] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.213 [2024-06-07 21:48:26.474438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.213 [2024-06-07 21:48:26.474453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.213 [2024-06-07 21:48:26.474459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.213 [2024-06-07 21:48:26.474465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.213 [2024-06-07 21:48:26.474479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.213 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.484396] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.484476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.474 [2024-06-07 21:48:26.484492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.474 [2024-06-07 21:48:26.484498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.474 [2024-06-07 21:48:26.484503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.474 [2024-06-07 21:48:26.484517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.474 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.494401] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.494480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.474 [2024-06-07 21:48:26.494495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.474 [2024-06-07 21:48:26.494502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.474 [2024-06-07 21:48:26.494507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.474 [2024-06-07 21:48:26.494521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.474 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.504334] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.504426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.474 [2024-06-07 21:48:26.504443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.474 [2024-06-07 21:48:26.504450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.474 [2024-06-07 21:48:26.504455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.474 [2024-06-07 21:48:26.504469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.474 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.514419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.514498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.474 [2024-06-07 21:48:26.514514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.474 [2024-06-07 21:48:26.514521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.474 [2024-06-07 21:48:26.514527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.474 [2024-06-07 21:48:26.514542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.474 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.524510] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.524597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.474 [2024-06-07 21:48:26.524612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.474 [2024-06-07 21:48:26.524618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.474 [2024-06-07 21:48:26.524624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.474 [2024-06-07 21:48:26.524637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.474 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.534497] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.534571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.474 [2024-06-07 21:48:26.534586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.474 [2024-06-07 21:48:26.534593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.474 [2024-06-07 21:48:26.534598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.474 [2024-06-07 21:48:26.534612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.474 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.544548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.544663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.474 [2024-06-07 21:48:26.544679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.474 [2024-06-07 21:48:26.544685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.474 [2024-06-07 21:48:26.544690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.474 [2024-06-07 21:48:26.544708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.474 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.554561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.554644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.474 [2024-06-07 21:48:26.554659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.474 [2024-06-07 21:48:26.554665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.474 [2024-06-07 21:48:26.554670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.474 [2024-06-07 21:48:26.554684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.474 qpair failed and we were unable to recover it.
00:31:26.474 [2024-06-07 21:48:26.564621] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.474 [2024-06-07 21:48:26.564715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.564729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.564736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.564741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.564755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.574623] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.574707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.574721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.574727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.574733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.574746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.584677] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.584789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.584804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.584810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.584816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.584829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.594694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.594778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.594796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.594802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.594808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.594821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.604705] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.604818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.604834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.604841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.604846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.604861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.614757] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.614849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.614864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.614870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.614875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.614889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.624765] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.624851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.624866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.624873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.624878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.624892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.634799] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.634881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.634896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.634903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.634913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.634926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.644840] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.644926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.644941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.644947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.644953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.644967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.654877] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.654954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.654969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.654975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.654980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.654994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.664911] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.664994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.665009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.665016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.665021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.665042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.674912] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.674991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.675006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.675012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.675017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.675035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.684943] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.685023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.685041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.685047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.685053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.475 [2024-06-07 21:48:26.685067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.475 qpair failed and we were unable to recover it.
00:31:26.475 [2024-06-07 21:48:26.695003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.475 [2024-06-07 21:48:26.695140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.475 [2024-06-07 21:48:26.695156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.475 [2024-06-07 21:48:26.695162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.475 [2024-06-07 21:48:26.695168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.476 [2024-06-07 21:48:26.695182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.476 qpair failed and we were unable to recover it.
00:31:26.476 [2024-06-07 21:48:26.705002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.476 [2024-06-07 21:48:26.705091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.476 [2024-06-07 21:48:26.705106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.476 [2024-06-07 21:48:26.705112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.476 [2024-06-07 21:48:26.705117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.476 [2024-06-07 21:48:26.705132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.476 qpair failed and we were unable to recover it.
00:31:26.476 [2024-06-07 21:48:26.715039] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.476 [2024-06-07 21:48:26.715123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.476 [2024-06-07 21:48:26.715139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.476 [2024-06-07 21:48:26.715145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.476 [2024-06-07 21:48:26.715151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.476 [2024-06-07 21:48:26.715165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.476 qpair failed and we were unable to recover it.
00:31:26.476 [2024-06-07 21:48:26.725068] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.476 [2024-06-07 21:48:26.725146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.476 [2024-06-07 21:48:26.725161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.476 [2024-06-07 21:48:26.725167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.476 [2024-06-07 21:48:26.725175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.476 [2024-06-07 21:48:26.725189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.476 qpair failed and we were unable to recover it.
00:31:26.476 [2024-06-07 21:48:26.735103] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.476 [2024-06-07 21:48:26.735185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.476 [2024-06-07 21:48:26.735199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.476 [2024-06-07 21:48:26.735206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.476 [2024-06-07 21:48:26.735211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.476 [2024-06-07 21:48:26.735224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.476 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.745124] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.745238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.745258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.745265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.745270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.745284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.755157] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.755257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.755271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.755277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.755282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.755296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.765200] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.765287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.765302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.765309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.765314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.765327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.775232] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.775309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.775324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.775330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.775335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.775348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.785269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.785350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.785364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.785371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.785376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.785389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.795289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.795367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.795381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.795387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.795392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.795406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.805291] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.805369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.805384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.805390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.805396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.805409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.815377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.815457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.815472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.815481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.815486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.815500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.825349] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.825430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.825445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.825451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.825456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.825470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.835301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.835386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.835401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.835407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.835412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.835426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.845344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.845470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.845486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.845492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.845497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.845511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.855441] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.855558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.855574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.855580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.736 [2024-06-07 21:48:26.855585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.736 [2024-06-07 21:48:26.855599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.736 qpair failed and we were unable to recover it.
00:31:26.736 [2024-06-07 21:48:26.865463] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.736 [2024-06-07 21:48:26.865543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.736 [2024-06-07 21:48:26.865558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.736 [2024-06-07 21:48:26.865564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.737 [2024-06-07 21:48:26.865569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.737 [2024-06-07 21:48:26.865583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.737 qpair failed and we were unable to recover it.
00:31:26.737 [2024-06-07 21:48:26.875501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.737 [2024-06-07 21:48:26.875583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.737 [2024-06-07 21:48:26.875598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.737 [2024-06-07 21:48:26.875604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.737 [2024-06-07 21:48:26.875609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.737 [2024-06-07 21:48:26.875623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.737 qpair failed and we were unable to recover it.
00:31:26.737 [2024-06-07 21:48:26.885569] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.737 [2024-06-07 21:48:26.885653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.737 [2024-06-07 21:48:26.885667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.737 [2024-06-07 21:48:26.885674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.737 [2024-06-07 21:48:26.885679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.737 [2024-06-07 21:48:26.885693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.737 qpair failed and we were unable to recover it.
00:31:26.737 [2024-06-07 21:48:26.895508] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.737 [2024-06-07 21:48:26.895585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.737 [2024-06-07 21:48:26.895599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.737 [2024-06-07 21:48:26.895605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.737 [2024-06-07 21:48:26.895611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.737 [2024-06-07 21:48:26.895625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.737 qpair failed and we were unable to recover it.
00:31:26.737 [2024-06-07 21:48:26.905584] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.737 [2024-06-07 21:48:26.905711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.737 [2024-06-07 21:48:26.905729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.737 [2024-06-07 21:48:26.905736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.737 [2024-06-07 21:48:26.905741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.737 [2024-06-07 21:48:26.905755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.737 qpair failed and we were unable to recover it.
00:31:26.737 [2024-06-07 21:48:26.915631] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.737 [2024-06-07 21:48:26.915712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.737 [2024-06-07 21:48:26.915727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.737 [2024-06-07 21:48:26.915734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.737 [2024-06-07 21:48:26.915739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.737 [2024-06-07 21:48:26.915753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.737 qpair failed and we were unable to recover it.
00:31:26.737 [2024-06-07 21:48:26.925661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.737 [2024-06-07 21:48:26.925738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.737 [2024-06-07 21:48:26.925753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.737 [2024-06-07 21:48:26.925759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.737 [2024-06-07 21:48:26.925764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.737 [2024-06-07 21:48:26.925777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.737 qpair failed and we were unable to recover it.
00:31:26.737 [2024-06-07 21:48:26.935711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:26.737 [2024-06-07 21:48:26.935788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:26.737 [2024-06-07 21:48:26.935802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:26.737 [2024-06-07 21:48:26.935808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:26.737 [2024-06-07 21:48:26.935814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:26.737 [2024-06-07 21:48:26.935827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:26.737 qpair failed and we were unable to recover it.
00:31:26.737 [2024-06-07 21:48:26.945708] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.737 [2024-06-07 21:48:26.945825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.737 [2024-06-07 21:48:26.945841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.737 [2024-06-07 21:48:26.945847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.737 [2024-06-07 21:48:26.945852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.737 [2024-06-07 21:48:26.945870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.737 qpair failed and we were unable to recover it. 00:31:26.737 [2024-06-07 21:48:26.955743] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.737 [2024-06-07 21:48:26.955826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.737 [2024-06-07 21:48:26.955841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.737 [2024-06-07 21:48:26.955847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.737 [2024-06-07 21:48:26.955852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.737 [2024-06-07 21:48:26.955866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.737 qpair failed and we were unable to recover it. 00:31:26.737 [2024-06-07 21:48:26.965787] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.737 [2024-06-07 21:48:26.965867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.737 [2024-06-07 21:48:26.965881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.737 [2024-06-07 21:48:26.965887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.737 [2024-06-07 21:48:26.965892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.737 [2024-06-07 21:48:26.965906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.737 qpair failed and we were unable to recover it. 
00:31:26.737 [2024-06-07 21:48:26.975805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.737 [2024-06-07 21:48:26.975883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.737 [2024-06-07 21:48:26.975898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.737 [2024-06-07 21:48:26.975904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.737 [2024-06-07 21:48:26.975909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.737 [2024-06-07 21:48:26.975923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.737 qpair failed and we were unable to recover it. 00:31:26.737 [2024-06-07 21:48:26.985821] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.737 [2024-06-07 21:48:26.985920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.737 [2024-06-07 21:48:26.985934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.737 [2024-06-07 21:48:26.985941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.737 [2024-06-07 21:48:26.985946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.737 [2024-06-07 21:48:26.985960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.737 qpair failed and we were unable to recover it. 00:31:26.737 [2024-06-07 21:48:26.995834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.737 [2024-06-07 21:48:26.995918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.737 [2024-06-07 21:48:26.995936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.737 [2024-06-07 21:48:26.995942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.737 [2024-06-07 21:48:26.995947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.737 [2024-06-07 21:48:26.995960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.737 qpair failed and we were unable to recover it. 
00:31:26.997 [2024-06-07 21:48:27.005900] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.997 [2024-06-07 21:48:27.005984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.997 [2024-06-07 21:48:27.005998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.997 [2024-06-07 21:48:27.006004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.997 [2024-06-07 21:48:27.006010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.997 [2024-06-07 21:48:27.006024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.997 qpair failed and we were unable to recover it. 00:31:26.997 [2024-06-07 21:48:27.015945] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.997 [2024-06-07 21:48:27.016024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.997 [2024-06-07 21:48:27.016045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.997 [2024-06-07 21:48:27.016051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.997 [2024-06-07 21:48:27.016057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.997 [2024-06-07 21:48:27.016071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.997 qpair failed and we were unable to recover it. 00:31:26.997 [2024-06-07 21:48:27.025999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.997 [2024-06-07 21:48:27.026095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.997 [2024-06-07 21:48:27.026109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.997 [2024-06-07 21:48:27.026115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.997 [2024-06-07 21:48:27.026121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.997 [2024-06-07 21:48:27.026135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.997 qpair failed and we were unable to recover it. 
00:31:26.997 [2024-06-07 21:48:27.035988] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.997 [2024-06-07 21:48:27.036074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.997 [2024-06-07 21:48:27.036089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.997 [2024-06-07 21:48:27.036096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.997 [2024-06-07 21:48:27.036104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.997 [2024-06-07 21:48:27.036119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.997 qpair failed and we were unable to recover it. 00:31:26.997 [2024-06-07 21:48:27.046009] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.997 [2024-06-07 21:48:27.046090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.997 [2024-06-07 21:48:27.046105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.997 [2024-06-07 21:48:27.046112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.997 [2024-06-07 21:48:27.046117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.997 [2024-06-07 21:48:27.046131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.997 qpair failed and we were unable to recover it. 00:31:26.997 [2024-06-07 21:48:27.056076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.997 [2024-06-07 21:48:27.056158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.997 [2024-06-07 21:48:27.056173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.997 [2024-06-07 21:48:27.056179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.997 [2024-06-07 21:48:27.056185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.997 [2024-06-07 21:48:27.056199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.997 qpair failed and we were unable to recover it. 
00:31:26.997 [2024-06-07 21:48:27.066104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.997 [2024-06-07 21:48:27.066187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.997 [2024-06-07 21:48:27.066202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.997 [2024-06-07 21:48:27.066208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.997 [2024-06-07 21:48:27.066213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.997 [2024-06-07 21:48:27.066227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.997 qpair failed and we were unable to recover it. 00:31:26.997 [2024-06-07 21:48:27.076113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.997 [2024-06-07 21:48:27.076195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.997 [2024-06-07 21:48:27.076210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.997 [2024-06-07 21:48:27.076216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.997 [2024-06-07 21:48:27.076221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.997 [2024-06-07 21:48:27.076235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.997 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.086127] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.086214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.086229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.086235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.086241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.086254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 
00:31:26.998 [2024-06-07 21:48:27.096164] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.096240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.096255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.096261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.096267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.096281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.106181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.106267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.106282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.106288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.106293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.106307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.116225] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.116326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.116341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.116348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.116353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.116367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 
00:31:26.998 [2024-06-07 21:48:27.126244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.126345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.126359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.126366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.126374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.126388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.136288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.136372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.136386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.136392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.136398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.136411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.146301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.146386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.146400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.146407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.146412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.146426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 
00:31:26.998 [2024-06-07 21:48:27.156274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.156390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.156406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.156412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.156417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.156431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.166342] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.166453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.166474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.166480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.166486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.166500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.176415] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.176501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.176515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.176522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.176527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.176541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 
00:31:26.998 [2024-06-07 21:48:27.186413] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.186499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.186514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.186520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.186525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.186539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.196466] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.196551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.196565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.196571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.196576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.196590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 00:31:26.998 [2024-06-07 21:48:27.206470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.206548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.206563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.998 [2024-06-07 21:48:27.206570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.998 [2024-06-07 21:48:27.206575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.998 [2024-06-07 21:48:27.206588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.998 qpair failed and we were unable to recover it. 
00:31:26.998 [2024-06-07 21:48:27.216506] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.998 [2024-06-07 21:48:27.216585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.998 [2024-06-07 21:48:27.216599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.999 [2024-06-07 21:48:27.216608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.999 [2024-06-07 21:48:27.216613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.999 [2024-06-07 21:48:27.216627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.999 qpair failed and we were unable to recover it. 00:31:26.999 [2024-06-07 21:48:27.226539] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.999 [2024-06-07 21:48:27.226622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.999 [2024-06-07 21:48:27.226636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.999 [2024-06-07 21:48:27.226643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.999 [2024-06-07 21:48:27.226648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.999 [2024-06-07 21:48:27.226662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.999 qpair failed and we were unable to recover it. 00:31:26.999 [2024-06-07 21:48:27.236603] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.999 [2024-06-07 21:48:27.236722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.999 [2024-06-07 21:48:27.236738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.999 [2024-06-07 21:48:27.236744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.999 [2024-06-07 21:48:27.236750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.999 [2024-06-07 21:48:27.236764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.999 qpair failed and we were unable to recover it. 
00:31:26.999 [2024-06-07 21:48:27.246622] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.999 [2024-06-07 21:48:27.246708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.999 [2024-06-07 21:48:27.246722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.999 [2024-06-07 21:48:27.246728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.999 [2024-06-07 21:48:27.246733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.999 [2024-06-07 21:48:27.246747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.999 qpair failed and we were unable to recover it. 00:31:26.999 [2024-06-07 21:48:27.256632] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.999 [2024-06-07 21:48:27.256713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.999 [2024-06-07 21:48:27.256728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.999 [2024-06-07 21:48:27.256734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.999 [2024-06-07 21:48:27.256739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:26.999 [2024-06-07 21:48:27.256752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.999 qpair failed and we were unable to recover it. 00:31:27.258 [2024-06-07 21:48:27.266640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.266770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.266787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.266793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.266799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.266812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 
00:31:27.259 [2024-06-07 21:48:27.276682] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.276767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.276782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.276789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.276795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.276809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 00:31:27.259 [2024-06-07 21:48:27.286722] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.286806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.286821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.286827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.286832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.286846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 00:31:27.259 [2024-06-07 21:48:27.296758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.296855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.296870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.296876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.296881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.296895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 
00:31:27.259 [2024-06-07 21:48:27.306857] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.306941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.306959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.306966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.306971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.306985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 00:31:27.259 [2024-06-07 21:48:27.316804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.316909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.316923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.316930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.316935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.316949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 00:31:27.259 [2024-06-07 21:48:27.326779] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.326872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.326886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.326892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.326897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.326912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 
00:31:27.259 [2024-06-07 21:48:27.336782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.336857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.336871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.336877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.336882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.336896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 00:31:27.259 [2024-06-07 21:48:27.346929] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.347007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.347022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.347033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.347038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.347055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 00:31:27.259 [2024-06-07 21:48:27.356928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.357008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.357023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.357036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.357041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.357056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 
00:31:27.259 [2024-06-07 21:48:27.366957] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.367085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.367101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.367107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.367113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.367127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 00:31:27.259 [2024-06-07 21:48:27.377062] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.377171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.377186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.377192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.377198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.377212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 00:31:27.259 [2024-06-07 21:48:27.387009] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.387097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.259 [2024-06-07 21:48:27.387112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.259 [2024-06-07 21:48:27.387118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.259 [2024-06-07 21:48:27.387123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.259 [2024-06-07 21:48:27.387137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.259 qpair failed and we were unable to recover it. 
00:31:27.259 [2024-06-07 21:48:27.397049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.259 [2024-06-07 21:48:27.397133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.397153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.397159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.397164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.397178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.260 [2024-06-07 21:48:27.407074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.407152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.407166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.407172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.407177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.407191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.260 [2024-06-07 21:48:27.417017] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.417136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.417150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.417157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.417162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.417176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 
00:31:27.260 [2024-06-07 21:48:27.427138] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.427251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.427265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.427272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.427277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.427292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.260 [2024-06-07 21:48:27.437156] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.437241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.437255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.437261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.437266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.437284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.260 [2024-06-07 21:48:27.447112] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.447199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.447214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.447220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.447225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.447238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 
00:31:27.260 [2024-06-07 21:48:27.457248] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.457334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.457348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.457354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.457359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.457373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.260 [2024-06-07 21:48:27.467248] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.467330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.467345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.467351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.467357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.467370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.260 [2024-06-07 21:48:27.477289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.477368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.477383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.477389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.477394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.477408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 
00:31:27.260 [2024-06-07 21:48:27.487313] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.487394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.487409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.487415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.487421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.487434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.260 [2024-06-07 21:48:27.497343] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.497424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.497439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.497445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.497450] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.497464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.260 [2024-06-07 21:48:27.507370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.507454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.507469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.507475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.507480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.507494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 
00:31:27.260 [2024-06-07 21:48:27.517344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.260 [2024-06-07 21:48:27.517438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.260 [2024-06-07 21:48:27.517453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.260 [2024-06-07 21:48:27.517459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.260 [2024-06-07 21:48:27.517464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.260 [2024-06-07 21:48:27.517478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.260 qpair failed and we were unable to recover it. 00:31:27.520 [2024-06-07 21:48:27.527408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.520 [2024-06-07 21:48:27.527491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.520 [2024-06-07 21:48:27.527506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.520 [2024-06-07 21:48:27.527512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.520 [2024-06-07 21:48:27.527521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.520 [2024-06-07 21:48:27.527535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.520 qpair failed and we were unable to recover it. 00:31:27.520 [2024-06-07 21:48:27.537481] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.520 [2024-06-07 21:48:27.537575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.520 [2024-06-07 21:48:27.537589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.520 [2024-06-07 21:48:27.537595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.520 [2024-06-07 21:48:27.537600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.520 [2024-06-07 21:48:27.537614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.520 qpair failed and we were unable to recover it. 
00:31:27.520 [2024-06-07 21:48:27.547498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.520 [2024-06-07 21:48:27.547596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.520 [2024-06-07 21:48:27.547611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.520 [2024-06-07 21:48:27.547617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.520 [2024-06-07 21:48:27.547622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.520 [2024-06-07 21:48:27.547636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.520 qpair failed and we were unable to recover it. 00:31:27.520 [2024-06-07 21:48:27.557519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.520 [2024-06-07 21:48:27.557603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.520 [2024-06-07 21:48:27.557618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.520 [2024-06-07 21:48:27.557625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.520 [2024-06-07 21:48:27.557631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.520 [2024-06-07 21:48:27.557644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.520 qpair failed and we were unable to recover it. 00:31:27.520 [2024-06-07 21:48:27.567563] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.520 [2024-06-07 21:48:27.567649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.520 [2024-06-07 21:48:27.567664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.520 [2024-06-07 21:48:27.567672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.520 [2024-06-07 21:48:27.567678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.520 [2024-06-07 21:48:27.567692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.520 qpair failed and we were unable to recover it. 
00:31:27.520 [2024-06-07 21:48:27.577495] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.520 [2024-06-07 21:48:27.577580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.520 [2024-06-07 21:48:27.577594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.520 [2024-06-07 21:48:27.577601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.520 [2024-06-07 21:48:27.577606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.520 [2024-06-07 21:48:27.577620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.520 qpair failed and we were unable to recover it. 00:31:27.520 [2024-06-07 21:48:27.587589] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.520 [2024-06-07 21:48:27.587669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.520 [2024-06-07 21:48:27.587684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.520 [2024-06-07 21:48:27.587690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.520 [2024-06-07 21:48:27.587696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.520 [2024-06-07 21:48:27.587709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.520 qpair failed and we were unable to recover it. 00:31:27.520 [2024-06-07 21:48:27.597566] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.520 [2024-06-07 21:48:27.597681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.520 [2024-06-07 21:48:27.597701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.520 [2024-06-07 21:48:27.597707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.520 [2024-06-07 21:48:27.597713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.520 [2024-06-07 21:48:27.597727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.520 qpair failed and we were unable to recover it. 
00:31:27.520 [2024-06-07 21:48:27.607662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.607744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.607758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.607765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.607769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.607783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.617625] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.617704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.617719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.617728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.617733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.617747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.627713] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.627796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.627811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.627817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.627822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.627836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 
00:31:27.521 [2024-06-07 21:48:27.637742] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.637857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.637877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.637884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.637890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.637904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.647793] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.647874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.647890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.647896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.647901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.647915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.657804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.657909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.657923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.657930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.657935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.657949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 
00:31:27.521 [2024-06-07 21:48:27.667827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.667910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.667925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.667931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.667937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.667950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.677829] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.677910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.677925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.677931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.677936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.677950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.687897] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.687977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.687993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.687999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.688004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.688018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 
00:31:27.521 [2024-06-07 21:48:27.697921] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.698007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.698022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.698035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.698041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.698055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.707940] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.708031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.708049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.708055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.708061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.708075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.717970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.718056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.718071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.718077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.718082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.718097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 
00:31:27.521 [2024-06-07 21:48:27.728049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.728175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.728190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.728197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.521 [2024-06-07 21:48:27.728202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.521 [2024-06-07 21:48:27.728216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.521 qpair failed and we were unable to recover it. 00:31:27.521 [2024-06-07 21:48:27.738057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.521 [2024-06-07 21:48:27.738134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.521 [2024-06-07 21:48:27.738149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.521 [2024-06-07 21:48:27.738155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.522 [2024-06-07 21:48:27.738161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.522 [2024-06-07 21:48:27.738174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.522 qpair failed and we were unable to recover it. 00:31:27.522 [2024-06-07 21:48:27.748067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.522 [2024-06-07 21:48:27.748151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.522 [2024-06-07 21:48:27.748166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.522 [2024-06-07 21:48:27.748172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.522 [2024-06-07 21:48:27.748177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.522 [2024-06-07 21:48:27.748191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.522 qpair failed and we were unable to recover it. 
00:31:27.522 [2024-06-07 21:48:27.758090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.522 [2024-06-07 21:48:27.758168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.522 [2024-06-07 21:48:27.758183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.522 [2024-06-07 21:48:27.758189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.522 [2024-06-07 21:48:27.758195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.522 [2024-06-07 21:48:27.758209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.522 qpair failed and we were unable to recover it. 00:31:27.522 [2024-06-07 21:48:27.768125] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.522 [2024-06-07 21:48:27.768209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.522 [2024-06-07 21:48:27.768224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.522 [2024-06-07 21:48:27.768230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.522 [2024-06-07 21:48:27.768235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.522 [2024-06-07 21:48:27.768249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.522 qpair failed and we were unable to recover it. 00:31:27.522 [2024-06-07 21:48:27.778194] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.522 [2024-06-07 21:48:27.778274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.522 [2024-06-07 21:48:27.778289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.522 [2024-06-07 21:48:27.778295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.522 [2024-06-07 21:48:27.778300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.522 [2024-06-07 21:48:27.778314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.522 qpair failed and we were unable to recover it. 
00:31:27.781 [2024-06-07 21:48:27.788226] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.781 [2024-06-07 21:48:27.788310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.788324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.788330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.788336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.788349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.798220] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.798302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.798320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.798326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.798331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.798345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.808224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.808310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.808324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.808331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.808336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.808350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 
00:31:27.782 [2024-06-07 21:48:27.818283] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.818364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.818379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.818385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.818390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.818405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.828316] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.828396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.828411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.828417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.828422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.828436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.838370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.838479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.838493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.838499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.838504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.838521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 
00:31:27.782 [2024-06-07 21:48:27.848381] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.848455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.848470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.848476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.848481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.848495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.858387] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.858466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.858481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.858487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.858492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.858506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.868388] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.868477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.868491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.868498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.868503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.868516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 
00:31:27.782 [2024-06-07 21:48:27.878451] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.878528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.878543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.878550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.878555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.878569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.888406] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.888487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.888505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.888511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.888516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.888530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.898447] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.898530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.898544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.898550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.898556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.898569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 
00:31:27.782 [2024-06-07 21:48:27.908489] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.908570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.908584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.908590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.908596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.908609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.918576] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.918654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.918669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.918675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.918680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.918694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.928599] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.928695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.928709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.928715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.928724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.928738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 
00:31:27.782 [2024-06-07 21:48:27.938571] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.938653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.938668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.938674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.938679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.938693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.948674] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.948777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.948791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.948797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.948802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.948816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.958626] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.958706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.958720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.958727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.958732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.958746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 
00:31:27.782 [2024-06-07 21:48:27.968634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.968728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.968743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.968748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.968754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.968767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.978745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.978828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.978843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.978849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.978854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.978868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:27.988758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.988869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.988884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.988890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.988895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.988909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 
00:31:27.782 [2024-06-07 21:48:27.998775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:27.998856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:27.998871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:27.998877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:27.998882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:27.998896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:28.008819] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:28.008897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:28.008912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:28.008918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:28.008923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:28.008937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 00:31:27.782 [2024-06-07 21:48:28.018858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.782 [2024-06-07 21:48:28.018961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.782 [2024-06-07 21:48:28.018976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.782 [2024-06-07 21:48:28.018985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.782 [2024-06-07 21:48:28.018990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.782 [2024-06-07 21:48:28.019003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.782 qpair failed and we were unable to recover it. 
00:31:27.783 [2024-06-07 21:48:28.028803] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.783 [2024-06-07 21:48:28.028890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.783 [2024-06-07 21:48:28.028905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.783 [2024-06-07 21:48:28.028911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.783 [2024-06-07 21:48:28.028916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.783 [2024-06-07 21:48:28.028929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.783 qpair failed and we were unable to recover it. 00:31:27.783 [2024-06-07 21:48:28.038875] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.783 [2024-06-07 21:48:28.038958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.783 [2024-06-07 21:48:28.038972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.783 [2024-06-07 21:48:28.038979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.783 [2024-06-07 21:48:28.038984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.783 [2024-06-07 21:48:28.038998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.783 qpair failed and we were unable to recover it. 00:31:27.783 [2024-06-07 21:48:28.048862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.783 [2024-06-07 21:48:28.048944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.783 [2024-06-07 21:48:28.048959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.783 [2024-06-07 21:48:28.048965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.783 [2024-06-07 21:48:28.048970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:27.783 [2024-06-07 21:48:28.048984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.783 qpair failed and we were unable to recover it. 
00:31:28.041 [2024-06-07 21:48:28.058907] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.041 [2024-06-07 21:48:28.058987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.041 [2024-06-07 21:48:28.059002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.041 [2024-06-07 21:48:28.059008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.041 [2024-06-07 21:48:28.059013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.041 [2024-06-07 21:48:28.059033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.041 qpair failed and we were unable to recover it. 00:31:28.041 [2024-06-07 21:48:28.069030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.041 [2024-06-07 21:48:28.069111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.042 [2024-06-07 21:48:28.069126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.042 [2024-06-07 21:48:28.069132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.042 [2024-06-07 21:48:28.069137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.042 [2024-06-07 21:48:28.069151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.042 qpair failed and we were unable to recover it. 00:31:28.042 [2024-06-07 21:48:28.079061] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.042 [2024-06-07 21:48:28.079143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.042 [2024-06-07 21:48:28.079158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.042 [2024-06-07 21:48:28.079165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.042 [2024-06-07 21:48:28.079170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.042 [2024-06-07 21:48:28.079184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.042 qpair failed and we were unable to recover it. 
00:31:28.042 [2024-06-07 21:48:28.089073] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.089153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.089167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.089173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.089179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.089193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.099119] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.099209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.099223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.099229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.099234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.099248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.109113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.109196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.109211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.109220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.109225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.109239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.119130] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.119218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.119233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.119238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.119244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.119258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.129194] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.129270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.129285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.129291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.129296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.129310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.139138] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.139216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.139231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.139237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.139242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.139256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.149210] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.149291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.149306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.149312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.149317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.149331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.159196] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.159283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.159298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.159304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.159309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.159322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.169311] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.169387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.169402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.169408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.169414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.169427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.179331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.179414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.042 [2024-06-07 21:48:28.179428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.042 [2024-06-07 21:48:28.179434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.042 [2024-06-07 21:48:28.179440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.042 [2024-06-07 21:48:28.179454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.042 qpair failed and we were unable to recover it.
00:31:28.042 [2024-06-07 21:48:28.189376] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.042 [2024-06-07 21:48:28.189491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.189506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.189512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.189518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.189531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.199336] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.199416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.199433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.199440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.199445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.199459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.209336] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.209417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.209432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.209439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.209444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.209458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.219490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.219569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.219584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.219590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.219595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.219609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.229496] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.229586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.229600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.229606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.229612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.229625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.239521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.239605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.239619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.239626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.239631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.239647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.249553] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.249636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.249651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.249658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.249662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.249676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.259548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.259701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.259715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.259721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.259727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.259741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.269604] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.269685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.269700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.269706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.269711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.269724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.279573] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.279653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.279667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.279673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.279678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.279692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.289705] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.289796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.289814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.289820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.289825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.289839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.043 [2024-06-07 21:48:28.299710] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.043 [2024-06-07 21:48:28.299786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.043 [2024-06-07 21:48:28.299801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.043 [2024-06-07 21:48:28.299807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.043 [2024-06-07 21:48:28.299812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.043 [2024-06-07 21:48:28.299825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.043 qpair failed and we were unable to recover it.
00:31:28.302 [2024-06-07 21:48:28.309740] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.302 [2024-06-07 21:48:28.309853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.302 [2024-06-07 21:48:28.309869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.302 [2024-06-07 21:48:28.309875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.302 [2024-06-07 21:48:28.309881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.302 [2024-06-07 21:48:28.309895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.319759] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.319837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.319851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.319858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.319863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.319877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.329766] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.329855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.329869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.329875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.329883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.329897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.339832] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.339909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.339924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.339930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.339936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.339949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.349842] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.349926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.349940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.349946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.349951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.349965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.359875] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.359990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.360004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.360010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.360016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.360035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.369957] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.370049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.370063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.370070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.370075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.370089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.379930] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.380012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.380031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.380037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.380043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.380056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.389951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.390034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.390049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.390055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.390060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.390074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.399918] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.400047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.400062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.400068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.400073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.400087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.410082] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.410165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.410180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.410186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.410191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.410205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.420061] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.420137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.420151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.420158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.420166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.420180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.430080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.430158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.303 [2024-06-07 21:48:28.430173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.303 [2024-06-07 21:48:28.430179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.303 [2024-06-07 21:48:28.430185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.303 [2024-06-07 21:48:28.430198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.303 qpair failed and we were unable to recover it.
00:31:28.303 [2024-06-07 21:48:28.440110] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.303 [2024-06-07 21:48:28.440193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.440207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.440214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.440219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.440233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.450143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.450221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.450236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.450242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.450247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.450261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.460173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.460257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.460272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.460278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.460283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.460297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.470213] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.470299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.470313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.470319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.470325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.470339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.480244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.480397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.480411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.480417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.480422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.480437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.490201] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.490278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.490293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.490299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.490304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.490318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.500289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.500369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.500383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.500390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.500395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.500409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.510277] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.510360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.510376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.510385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.510390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.510404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.520357] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.520436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.520451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.520457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.520462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.520477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.530377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.530460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.530475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.530481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.530486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.530500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.540432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.540522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.540536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.540543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.540548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.540561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.550434] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.550514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.550529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.550535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.550540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.550554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.304 [2024-06-07 21:48:28.560493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.304 [2024-06-07 21:48:28.560655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.304 [2024-06-07 21:48:28.560669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.304 [2024-06-07 21:48:28.560675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.304 [2024-06-07 21:48:28.560680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.304 [2024-06-07 21:48:28.560695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.304 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.570523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.570647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.570662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.570668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.570673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.570687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.580600] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.580679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.580693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.580699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.580705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.580718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.590620] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.590704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.590719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.590725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.590730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.590744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.600603] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.600681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.600698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.600705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.600710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.600724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.610635] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.610716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.610731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.610737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.610742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.610755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.620669] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.620745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.620760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.620766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.620771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.620784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.630686] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.630767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.630782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.630788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.630793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.630807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.640745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.640828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.640842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.640848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.640853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.640870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.650763] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.564 [2024-06-07 21:48:28.650844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.564 [2024-06-07 21:48:28.650858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.564 [2024-06-07 21:48:28.650864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.564 [2024-06-07 21:48:28.650869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90
00:31:28.564 [2024-06-07 21:48:28.650883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:28.564 qpair failed and we were unable to recover it.
00:31:28.564 [2024-06-07 21:48:28.660793] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.564 [2024-06-07 21:48:28.660869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.564 [2024-06-07 21:48:28.660883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.564 [2024-06-07 21:48:28.660890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.564 [2024-06-07 21:48:28.660895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.564 [2024-06-07 21:48:28.660908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.564 qpair failed and we were unable to recover it. 00:31:28.564 [2024-06-07 21:48:28.670814] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.564 [2024-06-07 21:48:28.670894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.564 [2024-06-07 21:48:28.670909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.564 [2024-06-07 21:48:28.670915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.564 [2024-06-07 21:48:28.670920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.564 [2024-06-07 21:48:28.670933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.564 qpair failed and we were unable to recover it. 00:31:28.564 [2024-06-07 21:48:28.680861] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.564 [2024-06-07 21:48:28.680945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.564 [2024-06-07 21:48:28.680959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.564 [2024-06-07 21:48:28.680965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.564 [2024-06-07 21:48:28.680971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.680984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 
00:31:28.565 [2024-06-07 21:48:28.690938] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.691063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.691080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.691088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.691093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.691107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.700930] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.701014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.701033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.701040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.701045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.701059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.710975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.711061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.711077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.711083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.711088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.711102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 
00:31:28.565 [2024-06-07 21:48:28.720973] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.721057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.721072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.721078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.721083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.721097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.731019] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.731108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.731123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.731129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.731137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.731151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.741034] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.741130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.741145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.741151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.741156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.741170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 
00:31:28.565 [2024-06-07 21:48:28.751005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.751091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.751106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.751112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.751117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.751131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.761059] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.761140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.761155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.761161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.761167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.761181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.771126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.771205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.771221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.771227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.771233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.771247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 
00:31:28.565 [2024-06-07 21:48:28.781201] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.781291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.781306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.781312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.781317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.781331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.791229] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.791348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.791362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.791369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.791374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.791387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.801214] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.801298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.801312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.801318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.801323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.801337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 
00:31:28.565 [2024-06-07 21:48:28.811276] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.565 [2024-06-07 21:48:28.811359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.565 [2024-06-07 21:48:28.811373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.565 [2024-06-07 21:48:28.811380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.565 [2024-06-07 21:48:28.811385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.565 [2024-06-07 21:48:28.811398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.565 qpair failed and we were unable to recover it. 00:31:28.565 [2024-06-07 21:48:28.821272] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.566 [2024-06-07 21:48:28.821349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.566 [2024-06-07 21:48:28.821364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.566 [2024-06-07 21:48:28.821370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.566 [2024-06-07 21:48:28.821379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.566 [2024-06-07 21:48:28.821392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.566 qpair failed and we were unable to recover it. 00:31:28.824 [2024-06-07 21:48:28.831222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.824 [2024-06-07 21:48:28.831302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.824 [2024-06-07 21:48:28.831317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.824 [2024-06-07 21:48:28.831323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.824 [2024-06-07 21:48:28.831327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.824 [2024-06-07 21:48:28.831341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.824 qpair failed and we were unable to recover it. 
00:31:28.824 [2024-06-07 21:48:28.841339] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.824 [2024-06-07 21:48:28.841419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.824 [2024-06-07 21:48:28.841433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.824 [2024-06-07 21:48:28.841439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.824 [2024-06-07 21:48:28.841444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.841457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 00:31:28.825 [2024-06-07 21:48:28.851375] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.851456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.851471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.851477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.851482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.851495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 00:31:28.825 [2024-06-07 21:48:28.861408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.861494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.861508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.861514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.861519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.861533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 
00:31:28.825 [2024-06-07 21:48:28.871427] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.871509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.871523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.871530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.871535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.871549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 00:31:28.825 [2024-06-07 21:48:28.881458] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.881542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.881557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.881563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.881568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.881582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 00:31:28.825 [2024-06-07 21:48:28.891485] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.891562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.891576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.891582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.891588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.891601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 
00:31:28.825 [2024-06-07 21:48:28.901526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.901603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.901617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.901624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.901629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.901643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 00:31:28.825 [2024-06-07 21:48:28.911633] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.911712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.911727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.911737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.911743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.911756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 00:31:28.825 [2024-06-07 21:48:28.921503] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.921583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.921597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.921604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.921609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.921623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 
00:31:28.825 [2024-06-07 21:48:28.931603] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.931676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.931690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.931696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.931702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.931715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 00:31:28.825 [2024-06-07 21:48:28.941579] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.941657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.941671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.941677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.941683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7bc000b90 00:31:28.825 [2024-06-07 21:48:28.941697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.825 qpair failed and we were unable to recover it. 00:31:28.825 [2024-06-07 21:48:28.951689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.825 [2024-06-07 21:48:28.951815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.825 [2024-06-07 21:48:28.951848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.825 [2024-06-07 21:48:28.951861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.825 [2024-06-07 21:48:28.951872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c4000b90 00:31:28.825 [2024-06-07 21:48:28.951901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.825 qpair failed and we were unable to recover it. 
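Every CONNECT failure in the run above reports the same completion status pair, "sct 1, sc 130": status code type 1 is the command-specific type, and 130 is 0x82, which the NVMe-oF specification defines for CONNECT as "Connect Invalid Parameters". That is the target-side "Unknown controller ID 0x1" rejection coming back to the host while the controller is being torn down. A minimal sketch of decoding that pair, assuming only SPDK's public nvme_spec.h header; decode_connect_failure() is an illustrative helper, not an SPDK API:

#include <stdio.h>

#include "spdk/nvme_spec.h"

/* Decode the completion status seen throughout this log (sct 1, sc 130). */
static void
decode_connect_failure(const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_SUCCESS) {
		return; /* CONNECT completed cleanly */
	}

	if (cpl->status.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC &&
	    cpl->status.sc == 0x82 /* 130: Fabrics "Connect Invalid Parameters" */) {
		/*
		 * Matches the target's "Unknown controller ID 0x1": the I/O
		 * qpair named a controller the target no longer tracks, so
		 * the CONNECT parameters are rejected as invalid.
		 */
		printf("CONNECT rejected: invalid parameters (stale controller ID?)\n");
		return;
	}

	printf("CONNECT failed: sct %u, sc 0x%x\n",
	       (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
}

int
main(void)
{
	/* Reproduce the status pair from the log entries above. */
	struct spdk_nvme_cpl cpl = {0};

	cpl.status.sct = SPDK_NVME_SCT_COMMAND_SPECIFIC;
	cpl.status.sc = 0x82;
	decode_connect_failure(&cpl);
	return 0;
}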
00:31:28.825 [2024-06-07 21:48:28.961703] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.825 [2024-06-07 21:48:28.961809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.825 [2024-06-07 21:48:28.961831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.825 [2024-06-07 21:48:28.961841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.825 [2024-06-07 21:48:28.961849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c4000b90
00:31:28.825 [2024-06-07 21:48:28.961871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:28.826 qpair failed and we were unable to recover it.
00:31:28.826 [2024-06-07 21:48:28.971803] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.826 [2024-06-07 21:48:28.971983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.826 [2024-06-07 21:48:28.972051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.826 [2024-06-07 21:48:28.972078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.826 [2024-06-07 21:48:28.972098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7b4000b90
00:31:28.826 [2024-06-07 21:48:28.972147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:28.826 qpair failed and we were unable to recover it.
00:31:28.826 [2024-06-07 21:48:28.981799] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.826 [2024-06-07 21:48:28.981928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.826 [2024-06-07 21:48:28.981957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.826 [2024-06-07 21:48:28.981971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.826 [2024-06-07 21:48:28.981984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7b4000b90
00:31:28.826 [2024-06-07 21:48:28.982013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:28.826 qpair failed and we were unable to recover it.
00:31:28.826 [2024-06-07 21:48:28.982183] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:31:28.826 A controller has encountered a failure and is being reset.
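The last two entries above are the pivot of the whole test: a keep-alive could not even be submitted, so the driver declares the controller failed and the harness resets it ("Controller properly reset." below). A hedged sketch of that host-side sequence, assuming SPDK's public NVMe API inside an existing application's poll loop; service_ctrlr() is our wrapper, not part of SPDK:

#include <stdio.h>

#include "spdk/nvme.h"

/*
 * Poll the admin queue (keep-alives and their completions ride on it) and,
 * if the driver has marked the controller failed, attempt a full reset.
 * Returns 0 while healthy, or spdk_nvme_ctrlr_reset()'s result otherwise.
 */
static int
service_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_process_admin_completions(ctrlr);

	if (!spdk_nvme_ctrlr_is_failed(ctrlr)) {
		return 0;
	}

	/*
	 * spdk_nvme_ctrlr_reset() disconnects every qpair, re-enables the
	 * controller, and reconnects; a zero return corresponds to the
	 * "Controller properly reset." line in the log.
	 */
	fprintf(stderr, "controller failed (e.g. keep-alive not submitted), resetting\n");
	return spdk_nvme_ctrlr_reset(ctrlr);
}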
00:31:28.826 [2024-06-07 21:48:28.991869] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.826 [2024-06-07 21:48:28.992061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.826 [2024-06-07 21:48:28.992120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.826 [2024-06-07 21:48:28.992145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.826 [2024-06-07 21:48:28.992164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13b4d60
00:31:28.826 [2024-06-07 21:48:28.992213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:28.826 qpair failed and we were unable to recover it.
00:31:28.826 [2024-06-07 21:48:29.001872] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:28.826 [2024-06-07 21:48:29.002010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:28.826 [2024-06-07 21:48:29.002052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:28.826 [2024-06-07 21:48:29.002067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:28.826 [2024-06-07 21:48:29.002079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13b4d60
00:31:28.826 [2024-06-07 21:48:29.002111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:28.826 qpair failed and we were unable to recover it.
00:31:28.826 [2024-06-07 21:48:29.002223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c2a00 (9): Bad file descriptor
00:31:28.826 Controller properly reset.
00:31:28.826 Initializing NVMe Controllers
00:31:28.826 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:28.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:28.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:31:28.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:31:28.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:31:28.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:31:28.826 Initialization complete. Launching workers.
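Once reset, the host re-attaches over the same TCP transport ID and fans the new I/O qpairs out across lcores 0-3. A sketch of just the attach step, assuming SPDK's synchronous connect path; the transport-ID string reuses the address printed in the log, and error handling is trimmed to the essentials:

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "reattach_sketch"; /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same key:value form the failing qpairs print above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Probes the target and connects the admin queue; NULL opts = defaults. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "attach to %s failed\n", trid.traddr);
		return 1;
	}
	printf("Attached to %s\n", trid.subnqn);

	spdk_nvme_detach(ctrlr);
	return 0;
}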
00:31:28.826 Starting thread on core 1 00:31:28.826 Starting thread on core 2 00:31:28.826 Starting thread on core 3 00:31:28.826 Starting thread on core 0 00:31:28.826 21:48:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:28.826 00:31:28.826 real 0m11.501s 00:31:28.826 user 0m21.363s 00:31:28.826 sys 0m4.454s 00:31:28.826 21:48:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:28.826 21:48:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:28.826 ************************************ 00:31:28.826 END TEST nvmf_target_disconnect_tc2 00:31:28.826 ************************************ 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:29.084 rmmod nvme_tcp 00:31:29.084 rmmod nvme_fabrics 00:31:29.084 rmmod nvme_keyring 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1639278 ']' 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1639278 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1639278 ']' 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 1639278 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1639278 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1639278' 00:31:29.084 killing process with pid 1639278 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 1639278 00:31:29.084 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 1639278 00:31:29.343 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:29.343 
21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:29.343 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:29.343 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:29.343 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:29.343 21:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.343 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:29.343 21:48:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.881 21:48:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:31.881 00:31:31.881 real 0m20.717s 00:31:31.881 user 0m48.883s 00:31:31.881 sys 0m9.799s 00:31:31.881 21:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:31.881 21:48:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:31.881 ************************************ 00:31:31.881 END TEST nvmf_target_disconnect 00:31:31.881 ************************************ 00:31:31.881 21:48:31 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:31:31.881 21:48:31 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:31.881 21:48:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.881 21:48:31 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:31.881 00:31:31.881 real 23m3.987s 00:31:31.881 user 50m13.275s 00:31:31.881 sys 7m4.426s 00:31:31.881 21:48:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:31.881 21:48:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.881 ************************************ 00:31:31.881 END TEST nvmf_tcp 00:31:31.881 ************************************ 00:31:31.881 21:48:31 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:31:31.881 21:48:31 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:31.881 21:48:31 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:31.881 21:48:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:31.881 21:48:31 -- common/autotest_common.sh@10 -- # set +x 00:31:31.881 ************************************ 00:31:31.881 START TEST spdkcli_nvmf_tcp 00:31:31.881 ************************************ 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:31.881 * Looking for test storage... 
00:31:31.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:31.881 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1641000 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1641000 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 1641000 ']' 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:31.882 21:48:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.882 [2024-06-07 21:48:31.830771] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:31:31.882 [2024-06-07 21:48:31.830832] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641000 ] 00:31:31.882 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.882 [2024-06-07 21:48:31.910146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:31.882 [2024-06-07 21:48:32.007048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.882 [2024-06-07 21:48:32.007053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.882 21:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:31.882 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:31.882 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:31.882 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:31.882 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:31.882 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:31.882 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:31.882 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:31.882 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:31.882 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:31.882 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:31.882 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:31.882 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:31.882 ' 00:31:34.416 [2024-06-07 21:48:34.564595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.799 [2024-06-07 21:48:35.740826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:37.702 [2024-06-07 21:48:37.903867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:39.607 [2024-06-07 21:48:39.761972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:40.994 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:40.994 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:40.994 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:40.994 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:40.994 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:40.994 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:40.994 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:40.994 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:40.994 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:40.994 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:40.994 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:40.994 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:40.994 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:40.994 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:40.995 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:40.995 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:40.995 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:41.308 21:48:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:41.308 21:48:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:41.308 21:48:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.308 21:48:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:41.308 21:48:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:41.308 21:48:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.308 21:48:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:41.308 21:48:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.567 21:48:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:41.567 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:41.567 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:41.567 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:41.567 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:41.567 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:41.567 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:41.567 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:41.567 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:41.567 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:41.567 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:41.567 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:41.567 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:41.567 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:41.567 ' 00:31:48.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:48.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:48.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:48.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:48.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:48.135 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:48.135 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:48.135 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:48.135 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:48.135 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:48.135 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:31:48.135 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:48.135 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:48.135 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1641000 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1641000 ']' 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1641000 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1641000 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1641000' 00:31:48.136 killing process with pid 1641000 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 1641000 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 1641000 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1641000 ']' 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1641000 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1641000 ']' 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1641000 00:31:48.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1641000) - No such process 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 1641000 is not found' 00:31:48.136 Process with pid 1641000 is not found 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:48.136 00:31:48.136 real 0m15.866s 00:31:48.136 user 0m33.342s 00:31:48.136 sys 0m0.767s 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:48.136 21:48:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:48.136 ************************************ 00:31:48.136 END TEST spdkcli_nvmf_tcp 00:31:48.136 ************************************ 00:31:48.136 21:48:47 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:48.136 21:48:47 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:48.136 21:48:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:48.136 21:48:47 -- common/autotest_common.sh@10 -- # set +x 00:31:48.136 ************************************ 00:31:48.136 START TEST nvmf_identify_passthru 00:31:48.136 ************************************ 00:31:48.136 21:48:47 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:48.136 * Looking for test storage... 00:31:48.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:48.136 21:48:47 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.136 21:48:47 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.136 21:48:47 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.136 21:48:47 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:48.136 21:48:47 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.136 21:48:47 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.136 21:48:47 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.136 21:48:47 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:48.136 21:48:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.136 21:48:47 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:48.136 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.137 21:48:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:48.137 21:48:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.137 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:48.137 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:48.137 21:48:47 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:48.137 21:48:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
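A note for readers tracing the discovery logic that follows: the e810/x722/mlx arrays declared here are populated from a prebuilt pci_bus_cache keyed by vendor:device ID (0x8086:0x1592 and 0x8086:0x159b for Intel E810, 0x8086:0x37d2 for X722, 0x15b3 device IDs for Mellanox). A minimal standalone sketch of the same classification, assuming lspci is available in place of the harness's cache:

#!/usr/bin/env bash
# Sketch only: bucket NIC PCI functions the way nvmf/common.sh does in this
# trace, but sourced from lspci rather than SPDK's pci_bus_cache.
declare -a e810 x722 mlx
while read -r bdf _class vd _rest; do
  case "$vd" in
    8086:1592|8086:159b) e810+=("$bdf") ;;  # Intel E810 -- matches 0000:af:00.0/.1 in this run
    8086:37d2)           x722+=("$bdf") ;;  # Intel X722
    15b3:*)              mlx+=("$bdf")  ;;  # Mellanox ConnectX family (broader than the exact IDs traced)
  esac
done < <(lspci -Dn)   # -D: print PCI domain, -n: numeric vendor:device IDs
printf 'e810: %s\nx722: %s\nmlx: %s\n' "${e810[*]}" "${x722[*]}" "${mlx[*]}"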
00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:54.710 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:54.710 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:54.710 21:48:53 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:54.710 Found net devices under 0000:af:00.0: cvl_0_0 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:54.710 Found net devices under 0000:af:00.1: cvl_0_1 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:54.710 21:48:53 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.710 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:54.711 21:48:53 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:54.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:31:54.711 00:31:54.711 --- 10.0.0.2 ping statistics --- 00:31:54.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.711 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:31:54.711 00:31:54.711 --- 10.0.0.1 ping statistics --- 00:31:54.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.711 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:54.711 21:48:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:54.711 21:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.711 21:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:86:00.0 00:31:54.711 21:48:54 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:86:00.0 00:31:54.711 21:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:31:54.711 21:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:31:54.711 21:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:31:54.711 21:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:54.711 21:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:54.711 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.902 
21:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:31:58.902 21:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:31:58.902 21:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:58.902 21:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:58.902 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.093 21:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:03.093 21:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.093 21:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.093 21:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1648821 00:32:03.093 21:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:03.093 21:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:03.093 21:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1648821 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 1648821 ']' 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:03.093 21:49:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.093 [2024-06-07 21:49:02.889033] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:32:03.093 [2024-06-07 21:49:02.889098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.093 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.093 [2024-06-07 21:49:02.984526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:03.093 [2024-06-07 21:49:03.076413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.093 [2024-06-07 21:49:03.076459] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
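The RPC exchange that follows — nvmf_set_config before framework_start_init, then transport, controller, subsystem, namespace, and listener — is easier to read end to end as a plain script. A hedged reconstruction using SPDK's scripts/rpc.py: every RPC below mirrors an rpc_cmd visible in this trace, paths assume an SPDK checkout, the sleep stands in for the harness's waitforlisten, and the real run additionally wraps the target in ip netns exec cvl_0_0_ns_spdk:

#!/usr/bin/env bash
set -e
# Start the target paused (--wait-for-rpc) so passthru config can be applied
# before the subsystem framework initializes.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
sleep 2   # crude stand-in for waitforlisten on /var/tmp/spdk.sock

./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr  # must precede framework_start_init
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The test's pass criterion, condensed: identify over PCIe and over NVMe/TCP
# must report the same physical serial number (BTLJ916308MR1P0FGN in this run),
# proving admin commands are passed through to the backing drive.
pcie_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' | awk '/Serial Number:/{print $3}')
tcp_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/{print $3}')
[ "$pcie_sn" = "$tcp_sn" ]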
00:32:03.093 [2024-06-07 21:49:03.076469] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.093 [2024-06-07 21:49:03.076478] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.093 [2024-06-07 21:49:03.076485] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:03.093 [2024-06-07 21:49:03.076539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.093 [2024-06-07 21:49:03.076639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:03.093 [2024-06-07 21:49:03.076733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:03.093 [2024-06-07 21:49:03.076734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.661 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:03.661 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:32:03.661 21:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:03.661 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.661 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.661 INFO: Log level set to 20 00:32:03.661 INFO: Requests: 00:32:03.661 { 00:32:03.661 "jsonrpc": "2.0", 00:32:03.661 "method": "nvmf_set_config", 00:32:03.661 "id": 1, 00:32:03.661 "params": { 00:32:03.661 "admin_cmd_passthru": { 00:32:03.661 "identify_ctrlr": true 00:32:03.661 } 00:32:03.661 } 00:32:03.661 } 00:32:03.661 00:32:03.661 INFO: response: 00:32:03.661 { 00:32:03.661 "jsonrpc": "2.0", 00:32:03.661 "id": 1, 00:32:03.661 "result": true 00:32:03.661 } 00:32:03.661 00:32:03.661 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.661 21:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:03.661 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.661 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.661 INFO: Setting log level to 20 00:32:03.661 INFO: Setting log level to 20 00:32:03.661 INFO: Log level set to 20 00:32:03.661 INFO: Log level set to 20 00:32:03.661 INFO: Requests: 00:32:03.661 { 00:32:03.661 "jsonrpc": "2.0", 00:32:03.661 "method": "framework_start_init", 00:32:03.661 "id": 1 00:32:03.661 } 00:32:03.661 00:32:03.661 INFO: Requests: 00:32:03.661 { 00:32:03.661 "jsonrpc": "2.0", 00:32:03.661 "method": "framework_start_init", 00:32:03.661 "id": 1 00:32:03.661 } 00:32:03.661 00:32:03.920 [2024-06-07 21:49:03.942184] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:03.920 INFO: response: 00:32:03.920 { 00:32:03.920 "jsonrpc": "2.0", 00:32:03.920 "id": 1, 00:32:03.920 "result": true 00:32:03.920 } 00:32:03.920 00:32:03.920 INFO: response: 00:32:03.920 { 00:32:03.920 "jsonrpc": "2.0", 00:32:03.920 "id": 1, 00:32:03.920 "result": true 00:32:03.920 } 00:32:03.920 00:32:03.920 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.920 21:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:03.920 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.920 21:49:03 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:03.920 INFO: Setting log level to 40 00:32:03.920 INFO: Setting log level to 40 00:32:03.920 INFO: Setting log level to 40 00:32:03.920 [2024-06-07 21:49:03.956527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.920 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.920 21:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:03.920 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:03.920 21:49:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.920 21:49:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:32:03.920 21:49:04 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.920 21:49:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.209 Nvme0n1 00:32:07.209 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.209 21:49:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:07.209 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.209 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.209 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.210 21:49:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.210 21:49:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.210 [2024-06-07 21:49:06.888614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.210 21:49:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.210 [ 00:32:07.210 { 00:32:07.210 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:07.210 "subtype": "Discovery", 00:32:07.210 "listen_addresses": [], 00:32:07.210 "allow_any_host": true, 00:32:07.210 "hosts": [] 00:32:07.210 }, 00:32:07.210 { 00:32:07.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:07.210 "subtype": "NVMe", 00:32:07.210 "listen_addresses": [ 00:32:07.210 { 00:32:07.210 "trtype": "TCP", 00:32:07.210 "adrfam": "IPv4", 00:32:07.210 "traddr": "10.0.0.2", 00:32:07.210 "trsvcid": "4420" 00:32:07.210 } 00:32:07.210 ], 00:32:07.210 "allow_any_host": true, 00:32:07.210 "hosts": [], 00:32:07.210 "serial_number": 
"SPDK00000000000001", 00:32:07.210 "model_number": "SPDK bdev Controller", 00:32:07.210 "max_namespaces": 1, 00:32:07.210 "min_cntlid": 1, 00:32:07.210 "max_cntlid": 65519, 00:32:07.210 "namespaces": [ 00:32:07.210 { 00:32:07.210 "nsid": 1, 00:32:07.210 "bdev_name": "Nvme0n1", 00:32:07.210 "name": "Nvme0n1", 00:32:07.210 "nguid": "272A9F51CC8942E6A6C5540FC4E42FCE", 00:32:07.210 "uuid": "272a9f51-cc89-42e6-a6c5-540fc4e42fce" 00:32:07.210 } 00:32:07.210 ] 00:32:07.210 } 00:32:07.210 ] 00:32:07.210 21:49:06 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.210 21:49:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:07.210 21:49:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:07.210 21:49:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:07.210 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:07.210 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:07.210 21:49:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:07.210 rmmod nvme_tcp 00:32:07.210 rmmod nvme_fabrics 00:32:07.210 rmmod nvme_keyring 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:32:07.210 21:49:07 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1648821 ']' 00:32:07.210 21:49:07 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1648821 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 1648821 ']' 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 1648821 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1648821 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1648821' 00:32:07.210 killing process with pid 1648821 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 1648821 00:32:07.210 21:49:07 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 1648821 00:32:09.119 21:49:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:09.119 21:49:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:09.119 21:49:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:09.119 21:49:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:09.119 21:49:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:09.119 21:49:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.119 21:49:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:09.119 21:49:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.026 21:49:10 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:11.026 00:32:11.026 real 0m23.367s 00:32:11.026 user 0m31.188s 00:32:11.026 sys 0m5.761s 00:32:11.026 21:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:11.026 21:49:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:11.026 ************************************ 00:32:11.026 END TEST nvmf_identify_passthru 00:32:11.026 ************************************ 00:32:11.026 21:49:10 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:11.026 21:49:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:11.026 21:49:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:11.027 21:49:11 -- common/autotest_common.sh@10 -- # set +x 00:32:11.027 ************************************ 00:32:11.027 START TEST nvmf_dif 00:32:11.027 ************************************ 00:32:11.027 21:49:11 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:11.027 * Looking for test storage... 
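Before nvmf_dif repeats the same bring-up below, the teardown contract the identify_passthru run just traced is worth stating compactly. A hedged summary of nvmftestfini as it ran above — the inline commands are stand-ins for SPDK helpers, and the netns delete in particular is an assumption about what _remove_spdk_ns does:

#!/usr/bin/env bash
# Teardown mirror of the earlier setup; interface, namespace, and pid names
# are the ones from this run.
modprobe -r nvme-tcp nvme-fabrics 2>/dev/null || true  # initiator modules out first; nvme_keyring fell out as a dependency above
kill "$nvmfpid" 2>/dev/null || true                    # killprocess equivalent ($nvmfpid was 1648821 here)
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1                               # release the initiator-side 10.0.0.1/24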
00:32:11.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:11.027 21:49:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.027 21:49:11 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.027 21:49:11 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.027 21:49:11 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.027 21:49:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.027 21:49:11 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.027 21:49:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.027 21:49:11 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:32:11.027 21:49:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:11.027 21:49:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:11.027 21:49:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:11.027 21:49:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:11.027 21:49:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:11.027 21:49:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.027 21:49:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:11.027 21:49:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:11.027 21:49:11 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:32:11.027 21:49:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:17.609 21:49:17 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.609 21:49:17 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:32:17.609 21:49:17 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:17.609 21:49:17 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:17.610 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:17.610 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:17.610 Found net devices under 0000:af:00.0: cvl_0_0 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:17.610 Found net devices under 0000:af:00.1: cvl_0_1 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.610 21:49:17 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:17.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:32:17.610 00:32:17.610 --- 10.0.0.2 ping statistics --- 00:32:17.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.610 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:32:17.610 00:32:17.610 --- 10.0.0.1 ping statistics --- 00:32:17.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.610 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:17.610 21:49:17 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:20.146 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:20.146 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:20.146 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:20.405 21:49:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:20.405 21:49:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:20.405 21:49:20 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:20.405 21:49:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1655203 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:20.405 21:49:20 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1655203 00:32:20.405 21:49:20 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 1655203 ']' 00:32:20.405 21:49:20 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.405 21:49:20 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:20.405 21:49:20 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.405 21:49:20 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:20.405 21:49:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:20.405 [2024-06-07 21:49:20.593395] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:32:20.405 [2024-06-07 21:49:20.593454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.405 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.665 [2024-06-07 21:49:20.690573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.665 [2024-06-07 21:49:20.781826] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.665 [2024-06-07 21:49:20.781865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.665 [2024-06-07 21:49:20.781876] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.665 [2024-06-07 21:49:20.781884] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.665 [2024-06-07 21:49:20.781891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
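The traces above show nvmftestinit pushing one port of the e810 pair (cvl_0_0) into a private network namespace so target and initiator talk over the physical link. A minimal standalone sketch of that topology, using only the interface names, addresses, and commands visible in the log:

# Target-side port lives in its own namespace; the initiator side stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator (NVMF_INITIATOR_IP)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target (NVMF_FIRST_TARGET_IP)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                 # same reachability check the log performs

Because nvmf_tgt is then launched via "ip netns exec cvl_0_0_ns_spdk", every listener created later in the log binds inside the namespace at 10.0.0.2:4420.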
00:32:20.666 [2024-06-07 21:49:20.781913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:32:21.601 21:49:21 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:21.601 21:49:21 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.601 21:49:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:21.601 21:49:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:21.601 [2024-06-07 21:49:21.567894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.601 21:49:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:21.601 21:49:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:21.601 ************************************ 00:32:21.601 START TEST fio_dif_1_default 00:32:21.601 ************************************ 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:21.601 bdev_null0 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:21.601 [2024-06-07 21:49:21.632212] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:21.601 { 00:32:21.601 "params": { 00:32:21.601 "name": "Nvme$subsystem", 00:32:21.601 "trtype": "$TEST_TRANSPORT", 00:32:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.601 "adrfam": "ipv4", 00:32:21.601 "trsvcid": "$NVMF_PORT", 00:32:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.601 "hdgst": ${hdgst:-false}, 00:32:21.601 "ddgst": ${ddgst:-false} 00:32:21.601 }, 00:32:21.601 "method": "bdev_nvme_attach_controller" 00:32:21.601 } 00:32:21.601 EOF 00:32:21.601 )") 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:21.601 "params": { 00:32:21.601 "name": "Nvme0", 00:32:21.601 "trtype": "tcp", 00:32:21.601 "traddr": "10.0.0.2", 00:32:21.601 "adrfam": "ipv4", 00:32:21.601 "trsvcid": "4420", 00:32:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:21.601 "hdgst": false, 00:32:21.601 "ddgst": false 00:32:21.601 }, 00:32:21.601 "method": "bdev_nvme_attach_controller" 00:32:21.601 }' 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:21.601 21:49:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.860 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:21.860 fio-3.35 00:32:21.860 Starting 1 thread 00:32:21.860 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.065 00:32:34.065 filename0: (groupid=0, jobs=1): err= 0: pid=1655634: Fri Jun 7 21:49:32 2024 00:32:34.065 read: IOPS=185, BW=743KiB/s (761kB/s)(7440KiB/10013msec) 00:32:34.065 slat (nsec): min=5491, max=49013, avg=5789.34, stdev=1362.98 00:32:34.065 clat (usec): min=861, max=42804, avg=21516.97, stdev=20591.46 00:32:34.065 lat (usec): min=866, max=42838, avg=21522.76, stdev=20591.39 00:32:34.065 clat percentiles (usec): 00:32:34.065 | 1.00th=[ 865], 5.00th=[ 873], 10.00th=[ 873], 20.00th=[ 881], 00:32:34.065 | 30.00th=[ 889], 40.00th=[ 898], 50.00th=[41157], 60.00th=[42206], 00:32:34.065 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:34.065 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:34.065 | 99.99th=[42730] 00:32:34.065 bw ( KiB/s): min= 704, max= 768, per=99.86%, avg=742.40, stdev=30.45, samples=20 00:32:34.065 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:32:34.065 lat 
(usec) : 1000=49.89% 00:32:34.065 lat (msec) : 50=50.11% 00:32:34.065 cpu : usr=93.85%, sys=5.89%, ctx=17, majf=0, minf=242 00:32:34.065 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:34.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.065 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.065 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:34.065 00:32:34.065 Run status group 0 (all jobs): 00:32:34.065 READ: bw=743KiB/s (761kB/s), 743KiB/s-743KiB/s (761kB/s-761kB/s), io=7440KiB (7619kB), run=10013-10013msec 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 00:32:34.065 real 0m11.318s 00:32:34.065 user 0m20.725s 00:32:34.065 sys 0m0.933s 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 ************************************ 00:32:34.065 END TEST fio_dif_1_default 00:32:34.065 ************************************ 00:32:34.065 21:49:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:34.065 21:49:32 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:34.065 21:49:32 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:34.065 21:49:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 ************************************ 00:32:34.065 START TEST fio_dif_1_multi_subsystems 00:32:34.065 ************************************ 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 
0 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 bdev_null0 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 [2024-06-07 21:49:33.027318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 bdev_null1 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:33 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:34.065 { 00:32:34.065 "params": { 00:32:34.065 "name": "Nvme$subsystem", 00:32:34.065 "trtype": "$TEST_TRANSPORT", 00:32:34.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.065 "adrfam": "ipv4", 00:32:34.065 "trsvcid": "$NVMF_PORT", 00:32:34.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.065 "hdgst": ${hdgst:-false}, 00:32:34.065 "ddgst": ${ddgst:-false} 00:32:34.065 }, 00:32:34.065 "method": "bdev_nvme_attach_controller" 00:32:34.065 } 00:32:34.065 EOF 00:32:34.065 )") 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:34.065 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # shift 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:34.066 { 00:32:34.066 "params": { 00:32:34.066 "name": "Nvme$subsystem", 00:32:34.066 "trtype": "$TEST_TRANSPORT", 00:32:34.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.066 "adrfam": "ipv4", 00:32:34.066 "trsvcid": "$NVMF_PORT", 00:32:34.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.066 "hdgst": ${hdgst:-false}, 00:32:34.066 "ddgst": ${ddgst:-false} 00:32:34.066 }, 00:32:34.066 "method": "bdev_nvme_attach_controller" 00:32:34.066 } 00:32:34.066 EOF 00:32:34.066 )") 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
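At this point the harness has generated, per subsystem, a bdev_nvme_attach_controller JSON stanza for fio (printed in full a few trace lines below). The target-side objects it attaches to were created by the RPCs traced above; as a hedged sketch, the same two-subsystem layout could be reproduced outside the harness with the in-tree scripts/rpc.py (path assumed) against the default /var/tmp/spdk.sock:

# Mirrors create_subsystems 0 1: one DIF-type-1 null bdev + subsystem + TCP listener each.
for i in 0 1; do
  scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
      --serial-number 53313233-$i --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
      -t tcp -a 10.0.0.2 -s 4420
done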
00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:34.066 "params": { 00:32:34.066 "name": "Nvme0", 00:32:34.066 "trtype": "tcp", 00:32:34.066 "traddr": "10.0.0.2", 00:32:34.066 "adrfam": "ipv4", 00:32:34.066 "trsvcid": "4420", 00:32:34.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:34.066 "hdgst": false, 00:32:34.066 "ddgst": false 00:32:34.066 }, 00:32:34.066 "method": "bdev_nvme_attach_controller" 00:32:34.066 },{ 00:32:34.066 "params": { 00:32:34.066 "name": "Nvme1", 00:32:34.066 "trtype": "tcp", 00:32:34.066 "traddr": "10.0.0.2", 00:32:34.066 "adrfam": "ipv4", 00:32:34.066 "trsvcid": "4420", 00:32:34.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:34.066 "hdgst": false, 00:32:34.066 "ddgst": false 00:32:34.066 }, 00:32:34.066 "method": "bdev_nvme_attach_controller" 00:32:34.066 }' 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:34.066 21:49:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:34.066 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:34.066 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:34.066 fio-3.35 00:32:34.066 Starting 2 threads 00:32:34.066 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.337 00:32:46.337 filename0: (groupid=0, jobs=1): err= 0: pid=1657867: Fri Jun 7 21:49:44 2024 00:32:46.337 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10024msec) 00:32:46.337 slat (nsec): min=9311, max=35629, avg=11385.41, stdev=3249.89 00:32:46.337 clat (usec): min=41037, max=43024, avg=42080.36, stdev=330.11 00:32:46.337 lat (usec): min=41046, max=43040, avg=42091.75, stdev=330.21 00:32:46.337 clat percentiles (usec): 00:32:46.337 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:32:46.337 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:46.337 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:32:46.337 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:32:46.337 | 99.99th=[43254] 
00:32:46.337 bw ( KiB/s): min= 352, max= 384, per=49.80%, avg=379.20, stdev=11.72, samples=20 00:32:46.337 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:32:46.337 lat (msec) : 50=100.00% 00:32:46.337 cpu : usr=97.53%, sys=2.14%, ctx=14, majf=0, minf=127 00:32:46.337 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.337 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.337 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:46.337 filename1: (groupid=0, jobs=1): err= 0: pid=1657868: Fri Jun 7 21:49:44 2024 00:32:46.337 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10029msec) 00:32:46.337 slat (nsec): min=9324, max=44102, avg=11149.48, stdev=2991.71 00:32:46.337 clat (usec): min=40934, max=44065, avg=41926.23, stdev=258.87 00:32:46.337 lat (usec): min=40944, max=44090, avg=41937.38, stdev=259.15 00:32:46.337 clat percentiles (usec): 00:32:46.337 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:32:46.337 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:46.337 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:46.337 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:32:46.337 | 99.99th=[44303] 00:32:46.337 bw ( KiB/s): min= 352, max= 384, per=49.93%, avg=380.80, stdev= 9.85, samples=20 00:32:46.337 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:46.337 lat (msec) : 50=100.00% 00:32:46.337 cpu : usr=97.41%, sys=2.26%, ctx=14, majf=0, minf=135 00:32:46.337 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.337 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.337 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:46.337 00:32:46.337 Run status group 0 (all jobs): 00:32:46.337 READ: bw=761KiB/s (779kB/s), 380KiB/s-381KiB/s (389kB/s-390kB/s), io=7632KiB (7815kB), run=10024-10029msec 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.337 21:49:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:46.337 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.338 00:32:46.338 real 0m11.682s 00:32:46.338 user 0m31.441s 00:32:46.338 sys 0m0.839s 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:46.338 21:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:46.338 ************************************ 00:32:46.338 END TEST fio_dif_1_multi_subsystems 00:32:46.338 ************************************ 00:32:46.338 21:49:44 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:46.338 21:49:44 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:46.338 21:49:44 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:46.338 21:49:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:46.338 ************************************ 00:32:46.338 START TEST fio_dif_rand_params 00:32:46.338 ************************************ 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.338 bdev_null0 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:46.338 [2024-06-07 21:49:44.776500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:46.338 { 00:32:46.338 "params": { 00:32:46.338 "name": "Nvme$subsystem", 00:32:46.338 "trtype": "$TEST_TRANSPORT", 00:32:46.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:46.338 "adrfam": "ipv4", 00:32:46.338 "trsvcid": "$NVMF_PORT", 00:32:46.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:46.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:46.338 "hdgst": 
${hdgst:-false}, 00:32:46.338 "ddgst": ${ddgst:-false} 00:32:46.338 }, 00:32:46.338 "method": "bdev_nvme_attach_controller" 00:32:46.338 } 00:32:46.338 EOF 00:32:46.338 )") 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
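The run that follows drives fio through SPDK's external spdk_bdev ioengine: the plugin is LD_PRELOADed and both the bdev JSON config and the generated job file arrive over /dev/fd. A sketch of an equivalent standalone invocation with regular files; the job parameters come from this test's trace (bs=128k, iodepth=3, numjobs=3, runtime=5, randread), while thread=1 and filename=Nvme0n1 are assumptions based on the "Starting 3 threads" banner and SPDK's NvmeXnY bdev naming:

# bdevs.json holds the printf'd {"params": ... "method": "bdev_nvme_attach_controller"} config.
cat > job.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf job_bdevs_placeholder.json job.fio
# (In the log both files are passed as /dev/fd/62 and /dev/fd/61 instead of named paths.)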
00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:46.338 "params": { 00:32:46.338 "name": "Nvme0", 00:32:46.338 "trtype": "tcp", 00:32:46.338 "traddr": "10.0.0.2", 00:32:46.338 "adrfam": "ipv4", 00:32:46.338 "trsvcid": "4420", 00:32:46.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.338 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.338 "hdgst": false, 00:32:46.338 "ddgst": false 00:32:46.338 }, 00:32:46.338 "method": "bdev_nvme_attach_controller" 00:32:46.338 }' 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:46.338 21:49:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:46.338 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:46.338 ... 
00:32:46.338 fio-3.35 00:32:46.338 Starting 3 threads 00:32:46.338 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.527 00:32:50.527 filename0: (groupid=0, jobs=1): err= 0: pid=1659966: Fri Jun 7 21:49:50 2024 00:32:50.527 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(115MiB/5004msec) 00:32:50.527 slat (nsec): min=9496, max=36220, avg=14327.84, stdev=6017.33 00:32:50.527 clat (usec): min=6202, max=58538, avg=16366.64, stdev=14368.11 00:32:50.527 lat (usec): min=6212, max=58556, avg=16380.97, stdev=14368.32 00:32:50.527 clat percentiles (usec): 00:32:50.527 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7963], 20.00th=[ 9110], 00:32:50.527 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10945], 60.00th=[12256], 00:32:50.527 | 70.00th=[13304], 80.00th=[14615], 90.00th=[51119], 95.00th=[54264], 00:32:50.527 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:32:50.527 | 99.99th=[58459] 00:32:50.527 bw ( KiB/s): min=14592, max=31488, per=32.01%, avg=23398.40, stdev=5171.23, samples=10 00:32:50.527 iops : min= 114, max= 246, avg=182.80, stdev=40.40, samples=10 00:32:50.527 lat (msec) : 10=33.62%, 20=53.60%, 50=0.33%, 100=12.45% 00:32:50.527 cpu : usr=96.70%, sys=2.96%, ctx=12, majf=0, minf=49 00:32:50.527 IO depths : 1=5.2%, 2=94.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.527 issued rwts: total=916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.527 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:50.527 filename0: (groupid=0, jobs=1): err= 0: pid=1659968: Fri Jun 7 21:49:50 2024 00:32:50.527 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(146MiB/5045msec) 00:32:50.527 slat (nsec): min=5878, max=71802, avg=14931.83, stdev=5306.06 00:32:50.527 clat (usec): min=5489, max=57465, avg=12923.02, stdev=9540.50 00:32:50.527 lat (usec): min=5500, max=57482, avg=12937.95, stdev=9540.76 00:32:50.527 clat percentiles (usec): 00:32:50.527 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 9110], 00:32:50.527 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[11207], 00:32:50.527 | 70.00th=[12256], 80.00th=[13698], 90.00th=[15270], 95.00th=[47449], 00:32:50.527 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:32:50.527 | 99.99th=[57410] 00:32:50.527 bw ( KiB/s): min=24576, max=35072, per=40.77%, avg=29798.40, stdev=4105.23, samples=10 00:32:50.527 iops : min= 192, max= 274, avg=232.80, stdev=32.07, samples=10 00:32:50.527 lat (msec) : 10=35.85%, 20=59.09%, 50=0.34%, 100=4.72% 00:32:50.527 cpu : usr=95.44%, sys=4.16%, ctx=29, majf=0, minf=133 00:32:50.527 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.527 issued rwts: total=1166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.527 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:50.527 filename0: (groupid=0, jobs=1): err= 0: pid=1659969: Fri Jun 7 21:49:50 2024 00:32:50.527 read: IOPS=158, BW=19.8MiB/s (20.8MB/s)(99.9MiB/5040msec) 00:32:50.527 slat (usec): min=9, max=106, avg=15.00, stdev= 7.15 00:32:50.527 clat (usec): min=6348, max=97888, avg=18896.56, stdev=17136.19 00:32:50.527 lat (usec): min=6358, max=97903, avg=18911.56, stdev=17136.26 00:32:50.527 clat percentiles (usec): 00:32:50.527 | 
1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 8356], 20.00th=[ 9634], 00:32:50.527 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11863], 60.00th=[13304], 00:32:50.527 | 70.00th=[14353], 80.00th=[16188], 90.00th=[53216], 95.00th=[55837], 00:32:50.527 | 99.00th=[58983], 99.50th=[93848], 99.90th=[98042], 99.95th=[98042], 00:32:50.527 | 99.99th=[98042] 00:32:50.527 bw ( KiB/s): min=12312, max=29184, per=27.88%, avg=20380.00, stdev=5274.46, samples=10 00:32:50.527 iops : min= 96, max= 228, avg=159.20, stdev=41.24, samples=10 00:32:50.527 lat (msec) : 10=26.16%, 20=56.82%, 50=0.63%, 100=16.40% 00:32:50.527 cpu : usr=96.63%, sys=3.00%, ctx=11, majf=0, minf=184 00:32:50.527 IO depths : 1=4.5%, 2=95.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.527 issued rwts: total=799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.527 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:50.527 00:32:50.527 Run status group 0 (all jobs): 00:32:50.527 READ: bw=71.4MiB/s (74.8MB/s), 19.8MiB/s-28.9MiB/s (20.8MB/s-30.3MB/s), io=360MiB (378MB), run=5004-5045msec 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:50.787 21:49:50 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 bdev_null0 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 [2024-06-07 21:49:50.974903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 bdev_null1 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
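The rpc_cmd calls traced here are autotest's wrapper around the target's JSON-RPC interface (scripts/rpc.py in the SPDK tree). Below is a minimal standalone sketch of the subsystem-0 sequence just completed above, assuming an nvmf_tgt is already running; every subcommand and flag mirrors the trace, while the rpc.py path and the transport-creation step are assumptions.

# Sketch only: standalone equivalent of the create_subsystem 0 steps above,
# assuming nvmf_tgt is running and "nvmf_create_transport -t tcp" was done.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree path from this log

# Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 2 (dif.sh@21)
$SPDK/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# Subsystem, namespace, and TCP listener (dif.sh@22, @23, @24)
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The surrounding trace repeats the same four-call pattern for subsystems 1 and 2.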
00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 bdev_null2 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:50.787 { 00:32:50.787 "params": { 00:32:50.787 "name": "Nvme$subsystem", 00:32:50.787 "trtype": "$TEST_TRANSPORT", 00:32:50.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:50.787 "adrfam": "ipv4", 00:32:50.787 "trsvcid": "$NVMF_PORT", 00:32:50.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:50.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:50.787 "hdgst": ${hdgst:-false}, 00:32:50.787 "ddgst": ${ddgst:-false} 00:32:50.787 }, 00:32:50.787 "method": "bdev_nvme_attach_controller" 00:32:50.787 } 00:32:50.787 EOF 00:32:50.787 )") 00:32:50.787 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:51.047 { 00:32:51.047 "params": { 00:32:51.047 "name": "Nvme$subsystem", 00:32:51.047 "trtype": "$TEST_TRANSPORT", 00:32:51.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:51.047 "adrfam": "ipv4", 00:32:51.047 "trsvcid": "$NVMF_PORT", 00:32:51.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:51.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:51.047 "hdgst": ${hdgst:-false}, 00:32:51.047 "ddgst": ${ddgst:-false} 00:32:51.047 }, 00:32:51.047 "method": "bdev_nvme_attach_controller" 00:32:51.047 } 00:32:51.047 EOF 00:32:51.047 )") 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:51.047 { 00:32:51.047 "params": { 00:32:51.047 "name": "Nvme$subsystem", 00:32:51.047 "trtype": "$TEST_TRANSPORT", 00:32:51.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:51.047 "adrfam": "ipv4", 00:32:51.047 "trsvcid": "$NVMF_PORT", 00:32:51.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:51.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:51.047 "hdgst": ${hdgst:-false}, 00:32:51.047 "ddgst": ${ddgst:-false} 00:32:51.047 }, 00:32:51.047 "method": "bdev_nvme_attach_controller" 00:32:51.047 } 00:32:51.047 EOF 00:32:51.047 )") 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:51.047 21:49:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:51.047 "params": { 00:32:51.047 "name": "Nvme0", 00:32:51.047 "trtype": "tcp", 00:32:51.047 "traddr": "10.0.0.2", 00:32:51.047 "adrfam": "ipv4", 00:32:51.047 "trsvcid": "4420", 00:32:51.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:51.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:51.047 "hdgst": false, 00:32:51.047 "ddgst": false 00:32:51.047 }, 00:32:51.047 "method": "bdev_nvme_attach_controller" 00:32:51.047 },{ 00:32:51.047 "params": { 00:32:51.047 "name": "Nvme1", 00:32:51.047 "trtype": "tcp", 00:32:51.047 "traddr": "10.0.0.2", 00:32:51.047 "adrfam": "ipv4", 00:32:51.047 "trsvcid": "4420", 00:32:51.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:51.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:51.047 "hdgst": false, 00:32:51.047 "ddgst": false 00:32:51.047 }, 00:32:51.047 "method": "bdev_nvme_attach_controller" 00:32:51.047 },{ 00:32:51.047 "params": { 00:32:51.047 "name": "Nvme2", 00:32:51.047 "trtype": "tcp", 00:32:51.047 "traddr": "10.0.0.2", 00:32:51.047 "adrfam": "ipv4", 00:32:51.047 "trsvcid": "4420", 00:32:51.047 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:51.047 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:51.047 "hdgst": false, 00:32:51.048 "ddgst": false 00:32:51.048 }, 00:32:51.048 "method": "bdev_nvme_attach_controller" 00:32:51.048 }' 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # asan_lib= 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:51.048 21:49:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:51.306 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:51.306 ... 00:32:51.306 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:51.306 ... 00:32:51.306 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:51.306 ... 00:32:51.306 fio-3.35 00:32:51.306 Starting 24 threads 00:32:51.306 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.506 00:33:03.506 filename0: (groupid=0, jobs=1): err= 0: pid=1661303: Fri Jun 7 21:50:02 2024 00:33:03.506 read: IOPS=410, BW=1643KiB/s (1683kB/s)(16.1MiB/10010msec) 00:33:03.506 slat (nsec): min=9748, max=87663, avg=29540.40, stdev=19206.35 00:33:03.506 clat (msec): min=3, max=322, avg=38.72, stdev=21.05 00:33:03.506 lat (msec): min=3, max=322, avg=38.75, stdev=21.05 00:33:03.506 clat percentiles (msec): 00:33:03.506 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.506 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 38], 00:33:03.506 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.506 | 99.00th=[ 126], 99.50th=[ 186], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.506 | 99.99th=[ 321] 00:33:03.506 bw ( KiB/s): min= 384, max= 2043, per=4.24%, avg=1636.37, stdev=334.36, samples=19 00:33:03.506 iops : min= 96, max= 510, avg=409.05, stdev=83.54, samples=19 00:33:03.506 lat (msec) : 4=0.39%, 10=0.73%, 20=0.15%, 50=97.57%, 250=0.78% 00:33:03.506 lat (msec) : 500=0.39% 00:33:03.506 cpu : usr=96.60%, sys=1.75%, ctx=41, majf=0, minf=74 00:33:03.506 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:03.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.506 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.506 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.506 filename0: (groupid=0, jobs=1): err= 0: pid=1661304: Fri Jun 7 21:50:02 2024 00:33:03.506 read: IOPS=403, BW=1616KiB/s (1655kB/s)(15.8MiB/10021msec) 00:33:03.507 slat (usec): min=8, max=133, avg=63.50, stdev=21.22 00:33:03.507 clat (msec): min=28, max=531, avg=39.08, stdev=27.79 00:33:03.507 lat (msec): min=28, max=531, avg=39.15, stdev=27.79 00:33:03.507 clat percentiles (msec): 00:33:03.507 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.507 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.507 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.507 | 99.00th=[ 66], 99.50th=[ 130], 99.90th=[ 460], 99.95th=[ 460], 00:33:03.507 | 99.99th=[ 531] 00:33:03.507 bw ( KiB/s): min= 256, max= 1792, per=4.18%, avg=1612.20, stdev=340.85, samples=20 00:33:03.507 iops : min= 64, max= 448, avg=403.05, stdev=85.21, samples=20 00:33:03.507 lat (msec) : 50=98.42%, 100=0.84%, 250=0.35%, 500=0.35%, 750=0.05% 00:33:03.507 cpu : 
usr=97.65%, sys=1.48%, ctx=29, majf=0, minf=26 00:33:03.507 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.507 filename0: (groupid=0, jobs=1): err= 0: pid=1661305: Fri Jun 7 21:50:02 2024 00:33:03.507 read: IOPS=404, BW=1616KiB/s (1655kB/s)(15.8MiB/10007msec) 00:33:03.507 slat (usec): min=9, max=129, avg=64.31, stdev=20.01 00:33:03.507 clat (msec): min=10, max=461, avg=39.00, stdev=27.41 00:33:03.507 lat (msec): min=10, max=461, avg=39.07, stdev=27.41 00:33:03.507 clat percentiles (msec): 00:33:03.507 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.507 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.507 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 38], 00:33:03.507 | 99.00th=[ 77], 99.50th=[ 129], 99.90th=[ 460], 99.95th=[ 460], 00:33:03.507 | 99.99th=[ 460] 00:33:03.507 bw ( KiB/s): min= 384, max= 1795, per=4.15%, avg=1603.26, stdev=317.44, samples=19 00:33:03.507 iops : min= 96, max= 448, avg=400.74, stdev=79.33, samples=19 00:33:03.507 lat (msec) : 20=0.45%, 50=97.92%, 100=0.84%, 250=0.40%, 500=0.40% 00:33:03.507 cpu : usr=96.51%, sys=1.84%, ctx=78, majf=0, minf=25 00:33:03.507 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:03.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 issued rwts: total=4044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.507 filename0: (groupid=0, jobs=1): err= 0: pid=1661306: Fri Jun 7 21:50:02 2024 00:33:03.507 read: IOPS=410, BW=1644KiB/s (1683kB/s)(16.1MiB/10007msec) 00:33:03.507 slat (nsec): min=9594, max=87813, avg=33359.22, stdev=19017.73 00:33:03.507 clat (msec): min=3, max=322, avg=38.66, stdev=21.07 00:33:03.507 lat (msec): min=3, max=322, avg=38.70, stdev=21.07 00:33:03.507 clat percentiles (msec): 00:33:03.507 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.507 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 38], 00:33:03.507 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.507 | 99.00th=[ 126], 99.50th=[ 186], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.507 | 99.99th=[ 321] 00:33:03.507 bw ( KiB/s): min= 384, max= 2048, per=4.24%, avg=1637.00, stdev=334.73, samples=19 00:33:03.507 iops : min= 96, max= 512, avg=409.21, stdev=83.68, samples=19 00:33:03.507 lat (msec) : 4=0.39%, 10=0.78%, 20=0.22%, 50=97.45%, 250=0.78% 00:33:03.507 lat (msec) : 500=0.39% 00:33:03.507 cpu : usr=98.59%, sys=0.88%, ctx=76, majf=0, minf=30 00:33:03.507 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:03.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.507 filename0: (groupid=0, jobs=1): err= 0: pid=1661307: Fri Jun 7 21:50:02 2024 00:33:03.507 read: IOPS=404, BW=1618KiB/s (1657kB/s)(15.8MiB/10006msec) 
00:33:03.507 slat (usec): min=5, max=146, avg=65.30, stdev=19.46 00:33:03.507 clat (msec): min=10, max=532, avg=38.98, stdev=27.90 00:33:03.507 lat (msec): min=10, max=532, avg=39.05, stdev=27.90 00:33:03.507 clat percentiles (msec): 00:33:03.507 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.507 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.507 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 38], 00:33:03.507 | 99.00th=[ 77], 99.50th=[ 129], 99.90th=[ 460], 99.95th=[ 460], 00:33:03.507 | 99.99th=[ 531] 00:33:03.507 bw ( KiB/s): min= 384, max= 1792, per=4.15%, avg=1603.05, stdev=317.18, samples=19 00:33:03.507 iops : min= 96, max= 448, avg=400.68, stdev=79.29, samples=19 00:33:03.507 lat (msec) : 20=0.40%, 50=98.02%, 100=0.84%, 250=0.35%, 500=0.35% 00:33:03.507 lat (msec) : 750=0.05% 00:33:03.507 cpu : usr=98.78%, sys=0.79%, ctx=38, majf=0, minf=30 00:33:03.507 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.507 filename0: (groupid=0, jobs=1): err= 0: pid=1661308: Fri Jun 7 21:50:02 2024 00:33:03.507 read: IOPS=403, BW=1615KiB/s (1654kB/s)(15.8MiB/10023msec) 00:33:03.507 slat (usec): min=10, max=135, avg=64.06, stdev=20.28 00:33:03.507 clat (msec): min=28, max=460, avg=39.06, stdev=27.28 00:33:03.507 lat (msec): min=28, max=460, avg=39.13, stdev=27.28 00:33:03.507 clat percentiles (msec): 00:33:03.507 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.507 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.507 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.507 | 99.00th=[ 66], 99.50th=[ 129], 99.90th=[ 460], 99.95th=[ 460], 00:33:03.507 | 99.99th=[ 460] 00:33:03.507 bw ( KiB/s): min= 256, max= 1792, per=4.18%, avg=1612.20, stdev=340.85, samples=20 00:33:03.507 iops : min= 64, max= 448, avg=403.05, stdev=85.21, samples=20 00:33:03.507 lat (msec) : 50=98.42%, 100=0.79%, 250=0.40%, 500=0.40% 00:33:03.507 cpu : usr=98.53%, sys=1.00%, ctx=20, majf=0, minf=29 00:33:03.507 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.507 filename0: (groupid=0, jobs=1): err= 0: pid=1661309: Fri Jun 7 21:50:02 2024 00:33:03.507 read: IOPS=336, BW=1345KiB/s (1378kB/s)(13.1MiB/10007msec) 00:33:03.507 slat (usec): min=5, max=274, avg=29.34, stdev=24.03 00:33:03.507 clat (msec): min=9, max=389, avg=47.38, stdev=25.32 00:33:03.507 lat (msec): min=9, max=389, avg=47.41, stdev=25.31 00:33:03.507 clat percentiles (msec): 00:33:03.507 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 38], 00:33:03.507 | 30.00th=[ 39], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 44], 00:33:03.507 | 70.00th=[ 46], 80.00th=[ 55], 90.00th=[ 65], 95.00th=[ 65], 00:33:03.507 | 99.00th=[ 192], 99.50th=[ 266], 99.90th=[ 388], 99.95th=[ 388], 00:33:03.507 | 99.99th=[ 388] 00:33:03.507 bw ( KiB/s): min= 464, max= 1680, per=3.47%, 
avg=1341.37, stdev=273.92, samples=19 00:33:03.507 iops : min= 116, max= 420, avg=335.26, stdev=68.49, samples=19 00:33:03.507 lat (msec) : 10=0.12%, 20=0.48%, 50=70.77%, 100=27.57%, 250=0.42% 00:33:03.507 lat (msec) : 500=0.65% 00:33:03.507 cpu : usr=98.82%, sys=0.82%, ctx=27, majf=0, minf=38 00:33:03.507 IO depths : 1=0.1%, 2=0.4%, 4=13.5%, 8=72.2%, 16=13.8%, 32=0.0%, >=64=0.0% 00:33:03.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 complete : 0=0.0%, 4=92.6%, 8=3.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 issued rwts: total=3366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.507 filename0: (groupid=0, jobs=1): err= 0: pid=1661310: Fri Jun 7 21:50:02 2024 00:33:03.507 read: IOPS=404, BW=1618KiB/s (1657kB/s)(15.8MiB/10006msec) 00:33:03.507 slat (usec): min=6, max=105, avg=49.09, stdev=20.78 00:33:03.507 clat (msec): min=21, max=455, avg=39.14, stdev=23.18 00:33:03.507 lat (msec): min=21, max=455, avg=39.19, stdev=23.18 00:33:03.507 clat percentiles (msec): 00:33:03.507 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.507 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.507 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.507 | 99.00th=[ 79], 99.50th=[ 213], 99.90th=[ 347], 99.95th=[ 347], 00:33:03.507 | 99.99th=[ 456] 00:33:03.507 bw ( KiB/s): min= 384, max= 1792, per=4.17%, avg=1609.47, stdev=319.99, samples=19 00:33:03.507 iops : min= 96, max= 448, avg=402.37, stdev=80.00, samples=19 00:33:03.507 lat (msec) : 50=98.47%, 100=0.74%, 250=0.40%, 500=0.40% 00:33:03.507 cpu : usr=99.23%, sys=0.44%, ctx=14, majf=0, minf=39 00:33:03.507 IO depths : 1=5.9%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:03.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.507 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.507 filename1: (groupid=0, jobs=1): err= 0: pid=1661311: Fri Jun 7 21:50:02 2024 00:33:03.507 read: IOPS=404, BW=1619KiB/s (1658kB/s)(15.8MiB/10002msec) 00:33:03.507 slat (nsec): min=5659, max=65251, avg=13669.34, stdev=9602.81 00:33:03.507 clat (msec): min=11, max=347, avg=39.40, stdev=22.54 00:33:03.507 lat (msec): min=11, max=347, avg=39.42, stdev=22.54 00:33:03.507 clat percentiles (msec): 00:33:03.507 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.508 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 38], 00:33:03.508 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 39], 95.00th=[ 39], 00:33:03.508 | 99.00th=[ 73], 99.50th=[ 213], 99.90th=[ 347], 99.95th=[ 347], 00:33:03.508 | 99.99th=[ 347] 00:33:03.508 bw ( KiB/s): min= 384, max= 1792, per=4.17%, avg=1609.26, stdev=319.86, samples=19 00:33:03.508 iops : min= 96, max= 448, avg=402.32, stdev=79.97, samples=19 00:33:03.508 lat (msec) : 20=0.10%, 50=98.27%, 100=0.84%, 250=0.40%, 500=0.40% 00:33:03.508 cpu : usr=99.03%, sys=0.58%, ctx=28, majf=0, minf=29 00:33:03.508 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.508 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:33:03.508 filename1: (groupid=0, jobs=1): err= 0: pid=1661312: Fri Jun 7 21:50:02 2024 00:33:03.508 read: IOPS=406, BW=1626KiB/s (1665kB/s)(15.9MiB/10006msec) 00:33:03.508 slat (usec): min=9, max=127, avg=59.43, stdev=23.13 00:33:03.508 clat (msec): min=8, max=532, avg=38.82, stdev=27.97 00:33:03.508 lat (msec): min=8, max=532, avg=38.88, stdev=27.96 00:33:03.508 clat percentiles (msec): 00:33:03.508 | 1.00th=[ 22], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.508 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.508 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.508 | 99.00th=[ 77], 99.50th=[ 129], 99.90th=[ 460], 99.95th=[ 460], 00:33:03.508 | 99.99th=[ 531] 00:33:03.508 bw ( KiB/s): min= 368, max= 1795, per=4.17%, avg=1611.68, stdev=324.20, samples=19 00:33:03.508 iops : min= 92, max= 448, avg=402.84, stdev=81.02, samples=19 00:33:03.508 lat (msec) : 10=0.10%, 20=0.39%, 50=97.10%, 100=1.67%, 250=0.34% 00:33:03.508 lat (msec) : 500=0.34%, 750=0.05% 00:33:03.508 cpu : usr=98.59%, sys=0.99%, ctx=30, majf=0, minf=23 00:33:03.508 IO depths : 1=5.6%, 2=11.3%, 4=23.3%, 8=52.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 issued rwts: total=4068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.508 filename1: (groupid=0, jobs=1): err= 0: pid=1661313: Fri Jun 7 21:50:02 2024 00:33:03.508 read: IOPS=404, BW=1618KiB/s (1656kB/s)(15.8MiB/10010msec) 00:33:03.508 slat (usec): min=7, max=134, avg=60.24, stdev=23.85 00:33:03.508 clat (msec): min=20, max=468, avg=39.05, stdev=23.28 00:33:03.508 lat (msec): min=20, max=468, avg=39.11, stdev=23.28 00:33:03.508 clat percentiles (msec): 00:33:03.508 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.508 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.508 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.508 | 99.00th=[ 82], 99.50th=[ 213], 99.90th=[ 347], 99.95th=[ 347], 00:33:03.508 | 99.99th=[ 468] 00:33:03.508 bw ( KiB/s): min= 383, max= 1792, per=4.17%, avg=1609.63, stdev=320.33, samples=19 00:33:03.508 iops : min= 95, max= 448, avg=402.37, stdev=80.24, samples=19 00:33:03.508 lat (msec) : 50=98.47%, 100=0.74%, 250=0.40%, 500=0.40% 00:33:03.508 cpu : usr=98.71%, sys=0.72%, ctx=104, majf=0, minf=36 00:33:03.508 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.508 filename1: (groupid=0, jobs=1): err= 0: pid=1661314: Fri Jun 7 21:50:02 2024 00:33:03.508 read: IOPS=405, BW=1622KiB/s (1661kB/s)(15.9MiB/10021msec) 00:33:03.508 slat (usec): min=9, max=131, avg=58.35, stdev=22.88 00:33:03.508 clat (msec): min=28, max=319, avg=38.98, stdev=21.00 00:33:03.508 lat (msec): min=28, max=319, avg=39.04, stdev=21.00 00:33:03.508 clat percentiles (msec): 00:33:03.508 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.508 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.508 | 70.00th=[ 38], 80.00th=[ 
38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.508 | 99.00th=[ 66], 99.50th=[ 213], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.508 | 99.99th=[ 321] 00:33:03.508 bw ( KiB/s): min= 384, max= 1792, per=4.19%, avg=1618.60, stdev=314.20, samples=20 00:33:03.508 iops : min= 96, max= 448, avg=404.65, stdev=78.55, samples=20 00:33:03.508 lat (msec) : 50=98.03%, 100=1.18%, 250=0.39%, 500=0.39% 00:33:03.508 cpu : usr=98.77%, sys=0.73%, ctx=62, majf=0, minf=26 00:33:03.508 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 issued rwts: total=4064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.508 filename1: (groupid=0, jobs=1): err= 0: pid=1661315: Fri Jun 7 21:50:02 2024 00:33:03.508 read: IOPS=404, BW=1618KiB/s (1657kB/s)(15.8MiB/10009msec) 00:33:03.508 slat (usec): min=4, max=138, avg=64.11, stdev=20.92 00:33:03.508 clat (msec): min=10, max=461, avg=38.98, stdev=27.40 00:33:03.508 lat (msec): min=10, max=461, avg=39.04, stdev=27.40 00:33:03.508 clat percentiles (msec): 00:33:03.508 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.508 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.508 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 38], 00:33:03.508 | 99.00th=[ 79], 99.50th=[ 129], 99.90th=[ 460], 99.95th=[ 460], 00:33:03.508 | 99.99th=[ 460] 00:33:03.508 bw ( KiB/s): min= 384, max= 1792, per=4.15%, avg=1602.89, stdev=317.21, samples=19 00:33:03.508 iops : min= 96, max= 448, avg=400.68, stdev=79.29, samples=19 00:33:03.508 lat (msec) : 20=0.40%, 50=98.02%, 100=0.79%, 250=0.40%, 500=0.40% 00:33:03.508 cpu : usr=98.84%, sys=0.72%, ctx=35, majf=0, minf=29 00:33:03.508 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.508 filename1: (groupid=0, jobs=1): err= 0: pid=1661316: Fri Jun 7 21:50:02 2024 00:33:03.508 read: IOPS=403, BW=1616KiB/s (1655kB/s)(15.8MiB/10021msec) 00:33:03.508 slat (usec): min=11, max=215, avg=54.22, stdev=25.03 00:33:03.508 clat (msec): min=28, max=531, avg=39.20, stdev=27.78 00:33:03.508 lat (msec): min=28, max=531, avg=39.26, stdev=27.78 00:33:03.508 clat percentiles (msec): 00:33:03.508 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.508 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:03.508 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.508 | 99.00th=[ 66], 99.50th=[ 129], 99.90th=[ 460], 99.95th=[ 460], 00:33:03.508 | 99.99th=[ 531] 00:33:03.508 bw ( KiB/s): min= 256, max= 1792, per=4.18%, avg=1612.20, stdev=340.57, samples=20 00:33:03.508 iops : min= 64, max= 448, avg=403.05, stdev=85.14, samples=20 00:33:03.508 lat (msec) : 50=98.42%, 100=0.84%, 250=0.35%, 500=0.35%, 750=0.05% 00:33:03.508 cpu : usr=98.51%, sys=1.06%, ctx=27, majf=0, minf=26 00:33:03.508 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.508 filename1: (groupid=0, jobs=1): err= 0: pid=1661317: Fri Jun 7 21:50:02 2024 00:33:03.508 read: IOPS=406, BW=1625KiB/s (1664kB/s)(15.9MiB/10005msec) 00:33:03.508 slat (usec): min=8, max=139, avg=23.72, stdev=17.09 00:33:03.508 clat (msec): min=22, max=395, avg=39.20, stdev=21.36 00:33:03.508 lat (msec): min=22, max=395, avg=39.23, stdev=21.36 00:33:03.508 clat percentiles (msec): 00:33:03.508 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.508 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 38], 00:33:03.508 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.508 | 99.00th=[ 59], 99.50th=[ 213], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.508 | 99.99th=[ 397] 00:33:03.508 bw ( KiB/s): min= 384, max= 1792, per=4.19%, avg=1616.42, stdev=311.17, samples=19 00:33:03.508 iops : min= 96, max= 448, avg=404.11, stdev=77.79, samples=19 00:33:03.508 lat (msec) : 50=98.08%, 100=1.13%, 250=0.39%, 500=0.39% 00:33:03.508 cpu : usr=98.64%, sys=0.89%, ctx=26, majf=0, minf=19 00:33:03.508 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 issued rwts: total=4064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.508 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.508 filename1: (groupid=0, jobs=1): err= 0: pid=1661318: Fri Jun 7 21:50:02 2024 00:33:03.508 read: IOPS=407, BW=1631KiB/s (1670kB/s)(15.9MiB/10007msec) 00:33:03.508 slat (usec): min=7, max=127, avg=30.37, stdev=18.48 00:33:03.508 clat (msec): min=13, max=382, avg=38.97, stdev=21.31 00:33:03.508 lat (msec): min=13, max=382, avg=39.00, stdev=21.31 00:33:03.508 clat percentiles (msec): 00:33:03.508 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.508 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:03.508 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.508 | 99.00th=[ 125], 99.50th=[ 186], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.508 | 99.99th=[ 384] 00:33:03.508 bw ( KiB/s): min= 384, max= 1792, per=4.21%, avg=1623.37, stdev=321.94, samples=19 00:33:03.508 iops : min= 96, max= 448, avg=405.84, stdev=80.48, samples=19 00:33:03.508 lat (msec) : 20=0.39%, 50=98.43%, 100=0.05%, 250=0.74%, 500=0.39% 00:33:03.508 cpu : usr=97.61%, sys=1.29%, ctx=39, majf=0, minf=28 00:33:03.508 IO depths : 1=4.8%, 2=11.0%, 4=24.8%, 8=51.6%, 16=7.7%, 32=0.0%, >=64=0.0% 00:33:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.508 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 issued rwts: total=4080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.509 filename2: (groupid=0, jobs=1): err= 0: pid=1661319: Fri Jun 7 21:50:02 2024 00:33:03.509 read: IOPS=404, BW=1619KiB/s (1658kB/s)(15.8MiB/10002msec) 00:33:03.509 slat (usec): min=6, max=123, avg=61.98, stdev=22.39 00:33:03.509 clat (msec): min=28, max=347, avg=38.99, stdev=22.49 00:33:03.509 lat (msec): min=28, max=347, avg=39.05, stdev=22.49 00:33:03.509 clat percentiles (msec): 00:33:03.509 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 
00:33:03.509 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.509 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.509 | 99.00th=[ 74], 99.50th=[ 213], 99.90th=[ 347], 99.95th=[ 347], 00:33:03.509 | 99.99th=[ 347] 00:33:03.509 bw ( KiB/s): min= 512, max= 1792, per=4.17%, avg=1609.68, stdev=290.19, samples=19 00:33:03.509 iops : min= 128, max= 448, avg=402.42, stdev=72.55, samples=19 00:33:03.509 lat (msec) : 50=98.42%, 100=0.79%, 250=0.40%, 500=0.40% 00:33:03.509 cpu : usr=99.03%, sys=0.56%, ctx=13, majf=0, minf=28 00:33:03.509 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:03.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.509 filename2: (groupid=0, jobs=1): err= 0: pid=1661320: Fri Jun 7 21:50:02 2024 00:33:03.509 read: IOPS=407, BW=1631KiB/s (1670kB/s)(15.9MiB/10007msec) 00:33:03.509 slat (nsec): min=6538, max=81775, avg=29470.65, stdev=15067.73 00:33:03.509 clat (msec): min=11, max=322, avg=39.00, stdev=21.11 00:33:03.509 lat (msec): min=11, max=322, avg=39.03, stdev=21.11 00:33:03.509 clat percentiles (msec): 00:33:03.509 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.509 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 38], 00:33:03.509 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 39], 95.00th=[ 39], 00:33:03.509 | 99.00th=[ 126], 99.50th=[ 186], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.509 | 99.99th=[ 321] 00:33:03.509 bw ( KiB/s): min= 384, max= 1792, per=4.21%, avg=1623.37, stdev=321.19, samples=19 00:33:03.509 iops : min= 96, max= 448, avg=405.84, stdev=80.30, samples=19 00:33:03.509 lat (msec) : 20=0.44%, 50=98.38%, 100=0.05%, 250=0.69%, 500=0.44% 00:33:03.509 cpu : usr=97.91%, sys=1.23%, ctx=48, majf=0, minf=31 00:33:03.509 IO depths : 1=1.4%, 2=7.1%, 4=23.1%, 8=57.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:33:03.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 issued rwts: total=4080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.509 filename2: (groupid=0, jobs=1): err= 0: pid=1661321: Fri Jun 7 21:50:02 2024 00:33:03.509 read: IOPS=406, BW=1627KiB/s (1666kB/s)(15.9MiB/10033msec) 00:33:03.509 slat (usec): min=6, max=113, avg=27.55, stdev=15.43 00:33:03.509 clat (msec): min=22, max=319, avg=39.09, stdev=20.91 00:33:03.509 lat (msec): min=22, max=319, avg=39.12, stdev=20.91 00:33:03.509 clat percentiles (msec): 00:33:03.509 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.509 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 38], 00:33:03.509 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.509 | 99.00th=[ 58], 99.50th=[ 213], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.509 | 99.99th=[ 321] 00:33:03.509 bw ( KiB/s): min= 384, max= 1792, per=4.21%, avg=1625.35, stdev=305.43, samples=20 00:33:03.509 iops : min= 96, max= 448, avg=406.30, stdev=76.35, samples=20 00:33:03.509 lat (msec) : 50=98.04%, 100=1.18%, 250=0.39%, 500=0.39% 00:33:03.509 cpu : usr=97.51%, sys=1.42%, ctx=142, majf=0, minf=34 00:33:03.509 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:33:03.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 issued rwts: total=4080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.509 filename2: (groupid=0, jobs=1): err= 0: pid=1661322: Fri Jun 7 21:50:02 2024 00:33:03.509 read: IOPS=403, BW=1616KiB/s (1654kB/s)(15.8MiB/10022msec) 00:33:03.509 slat (usec): min=10, max=129, avg=64.09, stdev=19.70 00:33:03.509 clat (msec): min=28, max=461, avg=39.08, stdev=27.27 00:33:03.509 lat (msec): min=28, max=461, avg=39.14, stdev=27.27 00:33:03.509 clat percentiles (msec): 00:33:03.509 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.509 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.509 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.509 | 99.00th=[ 66], 99.50th=[ 129], 99.90th=[ 460], 99.95th=[ 460], 00:33:03.509 | 99.99th=[ 460] 00:33:03.509 bw ( KiB/s): min= 256, max= 1792, per=4.18%, avg=1612.20, stdev=340.85, samples=20 00:33:03.509 iops : min= 64, max= 448, avg=403.05, stdev=85.21, samples=20 00:33:03.509 lat (msec) : 50=98.42%, 100=0.79%, 250=0.40%, 500=0.40% 00:33:03.509 cpu : usr=96.48%, sys=1.78%, ctx=177, majf=0, minf=29 00:33:03.509 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.509 filename2: (groupid=0, jobs=1): err= 0: pid=1661323: Fri Jun 7 21:50:02 2024 00:33:03.509 read: IOPS=404, BW=1617KiB/s (1656kB/s)(15.8MiB/10014msec) 00:33:03.509 slat (usec): min=9, max=131, avg=58.15, stdev=24.00 00:33:03.509 clat (msec): min=28, max=323, avg=39.09, stdev=21.10 00:33:03.509 lat (msec): min=28, max=323, avg=39.15, stdev=21.10 00:33:03.509 clat percentiles (msec): 00:33:03.509 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.509 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.509 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.509 | 99.00th=[ 129], 99.50th=[ 188], 99.90th=[ 326], 99.95th=[ 326], 00:33:03.509 | 99.99th=[ 326] 00:33:03.509 bw ( KiB/s): min= 384, max= 1792, per=4.18%, avg=1612.60, stdev=322.72, samples=20 00:33:03.509 iops : min= 96, max= 448, avg=403.15, stdev=80.68, samples=20 00:33:03.509 lat (msec) : 50=98.42%, 100=0.40%, 250=0.79%, 500=0.40% 00:33:03.509 cpu : usr=98.80%, sys=0.81%, ctx=16, majf=0, minf=21 00:33:03.509 IO depths : 1=5.5%, 2=11.7%, 4=24.8%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:33:03.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 issued rwts: total=4048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.509 filename2: (groupid=0, jobs=1): err= 0: pid=1661324: Fri Jun 7 21:50:02 2024 00:33:03.509 read: IOPS=405, BW=1622KiB/s (1661kB/s)(15.9MiB/10021msec) 00:33:03.509 slat (usec): min=9, max=120, avg=51.69, stdev=23.58 00:33:03.509 clat (msec): min=28, max=319, avg=39.06, stdev=20.99 00:33:03.509 lat (msec): min=28, max=319, avg=39.11, stdev=20.98 
00:33:03.509 clat percentiles (msec): 00:33:03.509 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:33:03.509 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:33:03.509 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.509 | 99.00th=[ 66], 99.50th=[ 213], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.509 | 99.99th=[ 321] 00:33:03.509 bw ( KiB/s): min= 384, max= 1792, per=4.19%, avg=1618.60, stdev=314.20, samples=20 00:33:03.509 iops : min= 96, max= 448, avg=404.65, stdev=78.55, samples=20 00:33:03.509 lat (msec) : 50=98.03%, 100=1.18%, 250=0.39%, 500=0.39% 00:33:03.509 cpu : usr=96.51%, sys=1.90%, ctx=55, majf=0, minf=42 00:33:03.509 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 issued rwts: total=4064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.509 filename2: (groupid=0, jobs=1): err= 0: pid=1661325: Fri Jun 7 21:50:02 2024 00:33:03.509 read: IOPS=405, BW=1622KiB/s (1661kB/s)(15.9MiB/10021msec) 00:33:03.509 slat (usec): min=9, max=131, avg=56.14, stdev=23.58 00:33:03.509 clat (msec): min=28, max=319, avg=39.00, stdev=20.99 00:33:03.509 lat (msec): min=28, max=319, avg=39.05, stdev=20.98 00:33:03.509 clat percentiles (msec): 00:33:03.509 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.509 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.509 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.509 | 99.00th=[ 65], 99.50th=[ 213], 99.90th=[ 321], 99.95th=[ 321], 00:33:03.509 | 99.99th=[ 321] 00:33:03.509 bw ( KiB/s): min= 384, max= 1792, per=4.19%, avg=1618.60, stdev=314.20, samples=20 00:33:03.509 iops : min= 96, max= 448, avg=404.65, stdev=78.55, samples=20 00:33:03.509 lat (msec) : 50=98.03%, 100=1.18%, 250=0.39%, 500=0.39% 00:33:03.509 cpu : usr=98.14%, sys=1.29%, ctx=131, majf=0, minf=28 00:33:03.509 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:03.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.509 issued rwts: total=4064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.509 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.509 filename2: (groupid=0, jobs=1): err= 0: pid=1661326: Fri Jun 7 21:50:02 2024 00:33:03.509 read: IOPS=406, BW=1626KiB/s (1665kB/s)(15.9MiB/10010msec) 00:33:03.509 slat (usec): min=4, max=129, avg=59.04, stdev=24.44 00:33:03.509 clat (msec): min=10, max=347, avg=38.81, stdev=22.65 00:33:03.509 lat (msec): min=10, max=347, avg=38.87, stdev=22.65 00:33:03.509 clat percentiles (msec): 00:33:03.509 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:33:03.509 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:33:03.509 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:33:03.509 | 99.00th=[ 72], 99.50th=[ 213], 99.90th=[ 347], 99.95th=[ 347], 00:33:03.510 | 99.99th=[ 347] 00:33:03.510 bw ( KiB/s): min= 384, max= 1836, per=4.17%, avg=1611.95, stdev=321.50, samples=19 00:33:03.510 iops : min= 96, max= 459, avg=402.95, stdev=80.38, samples=19 00:33:03.510 lat (msec) : 20=0.39%, 50=97.84%, 100=0.98%, 250=0.39%, 500=0.39% 00:33:03.510 cpu : usr=96.88%, sys=1.54%, ctx=35, majf=0, 
minf=24 00:33:03.510 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:03.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.510 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.510 issued rwts: total=4070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.510 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:03.510 00:33:03.510 Run status group 0 (all jobs): 00:33:03.510 READ: bw=37.7MiB/s (39.5MB/s), 1345KiB/s-1644KiB/s (1378kB/s-1683kB/s), io=378MiB (396MB), run=10002-10033msec 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 
00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 bdev_null0 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 [2024-06-07 21:50:03.014615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 bdev_null1 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:03.510 { 00:33:03.510 "params": { 00:33:03.510 "name": "Nvme$subsystem", 00:33:03.510 "trtype": "$TEST_TRANSPORT", 00:33:03.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:03.510 "adrfam": "ipv4", 00:33:03.510 
"trsvcid": "$NVMF_PORT", 00:33:03.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:03.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:03.510 "hdgst": ${hdgst:-false}, 00:33:03.510 "ddgst": ${ddgst:-false} 00:33:03.510 }, 00:33:03.510 "method": "bdev_nvme_attach_controller" 00:33:03.510 } 00:33:03.510 EOF 00:33:03.510 )") 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:33:03.510 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:03.511 { 00:33:03.511 "params": { 00:33:03.511 "name": "Nvme$subsystem", 00:33:03.511 "trtype": "$TEST_TRANSPORT", 00:33:03.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:03.511 "adrfam": "ipv4", 00:33:03.511 "trsvcid": "$NVMF_PORT", 00:33:03.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:03.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:03.511 "hdgst": ${hdgst:-false}, 00:33:03.511 "ddgst": ${ddgst:-false} 00:33:03.511 }, 00:33:03.511 "method": "bdev_nvme_attach_controller" 00:33:03.511 } 00:33:03.511 EOF 00:33:03.511 )") 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:03.511 "params": { 00:33:03.511 "name": "Nvme0", 00:33:03.511 "trtype": "tcp", 00:33:03.511 "traddr": "10.0.0.2", 00:33:03.511 "adrfam": "ipv4", 00:33:03.511 "trsvcid": "4420", 00:33:03.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:03.511 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:03.511 "hdgst": false, 00:33:03.511 "ddgst": false 00:33:03.511 }, 00:33:03.511 "method": "bdev_nvme_attach_controller" 00:33:03.511 },{ 00:33:03.511 "params": { 00:33:03.511 "name": "Nvme1", 00:33:03.511 "trtype": "tcp", 00:33:03.511 "traddr": "10.0.0.2", 00:33:03.511 "adrfam": "ipv4", 00:33:03.511 "trsvcid": "4420", 00:33:03.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:03.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:03.511 "hdgst": false, 00:33:03.511 "ddgst": false 00:33:03.511 }, 00:33:03.511 "method": "bdev_nvme_attach_controller" 00:33:03.511 }' 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:03.511 21:50:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:03.511 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:03.511 ... 00:33:03.511 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:03.511 ... 
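Stripped of the xtrace noise, the run launched here preloads the SPDK fio bdev plugin into stock fio and feeds it the merged JSON printed just above. The following is a hand-written sketch of an equivalent standalone invocation, not part of the log: the "subsystems"/"bdev" wrapper and the Nvme0n1 bdev name are assumptions based on SPDK's usual JSON-config layout and bdev naming, and the job options only approximate the randread/8 KiB/QD8 pattern fio reports below (the second controller, Nvme1, would be identical apart from the index).

    # Config fio will load; the wrapper layout is assumed, the params are
    # the ones this run resolved above.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Preload the plugin and point fio at the config, as the harness does
    # above; the spdk_bdev engine requires thread mode.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json \
      --name=filename0 --filename=Nvme0n1 --thread --rw=randread --bs=8k --iodepth=8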
00:33:03.511 fio-3.35 00:33:03.511 Starting 4 threads 00:33:03.511 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.076 00:33:10.076 filename0: (groupid=0, jobs=1): err= 0: pid=1663438: Fri Jun 7 21:50:09 2024 00:33:10.076 read: IOPS=1750, BW=13.7MiB/s (14.3MB/s)(68.4MiB/5003msec) 00:33:10.076 slat (nsec): min=6738, max=73364, avg=13515.05, stdev=6522.08 00:33:10.076 clat (usec): min=2202, max=43916, avg=4529.56, stdev=1391.62 00:33:10.076 lat (usec): min=2219, max=43942, avg=4543.07, stdev=1391.25 00:33:10.076 clat percentiles (usec): 00:33:10.076 | 1.00th=[ 3228], 5.00th=[ 3818], 10.00th=[ 3884], 20.00th=[ 4015], 00:33:10.076 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:33:10.076 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 6063], 95.00th=[ 6128], 00:33:10.076 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 7832], 99.95th=[43779], 00:33:10.076 | 99.99th=[43779] 00:33:10.076 bw ( KiB/s): min=13040, max=14304, per=24.91%, avg=13996.44, stdev=383.56, samples=9 00:33:10.076 iops : min= 1630, max= 1788, avg=1749.56, stdev=47.95, samples=9 00:33:10.076 lat (msec) : 4=19.71%, 10=80.20%, 50=0.09% 00:33:10.076 cpu : usr=97.82%, sys=1.76%, ctx=6, majf=0, minf=9 00:33:10.076 IO depths : 1=0.1%, 2=3.5%, 4=68.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.076 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.076 issued rwts: total=8756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.076 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:10.076 filename0: (groupid=0, jobs=1): err= 0: pid=1663439: Fri Jun 7 21:50:09 2024 00:33:10.076 read: IOPS=1734, BW=13.6MiB/s (14.2MB/s)(67.8MiB/5001msec) 00:33:10.076 slat (nsec): min=8372, max=66473, avg=15020.35, stdev=7775.39 00:33:10.076 clat (usec): min=1777, max=48334, avg=4564.98, stdev=1521.33 00:33:10.076 lat (usec): min=1787, max=48356, avg=4580.00, stdev=1520.70 00:33:10.076 clat percentiles (usec): 00:33:10.076 | 1.00th=[ 3490], 5.00th=[ 3916], 10.00th=[ 3982], 20.00th=[ 4080], 00:33:10.076 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4424], 00:33:10.076 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 6063], 95.00th=[ 6128], 00:33:10.076 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 8160], 99.95th=[48497], 00:33:10.076 | 99.99th=[48497] 00:33:10.076 bw ( KiB/s): min=12368, max=14256, per=24.66%, avg=13856.00, stdev=589.24, samples=9 00:33:10.076 iops : min= 1546, max= 1782, avg=1732.00, stdev=73.65, samples=9 00:33:10.076 lat (msec) : 2=0.07%, 4=13.05%, 10=86.79%, 50=0.09% 00:33:10.076 cpu : usr=97.48%, sys=2.10%, ctx=12, majf=0, minf=9 00:33:10.076 IO depths : 1=0.1%, 2=1.7%, 4=70.8%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.076 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.076 issued rwts: total=8676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.076 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:10.076 filename1: (groupid=0, jobs=1): err= 0: pid=1663440: Fri Jun 7 21:50:09 2024 00:33:10.076 read: IOPS=1755, BW=13.7MiB/s (14.4MB/s)(68.6MiB/5002msec) 00:33:10.076 slat (usec): min=6, max=216, avg=13.63, stdev= 6.67 00:33:10.076 clat (usec): min=2204, max=45281, avg=4516.37, stdev=1424.67 00:33:10.076 lat (usec): min=2220, max=45306, avg=4529.99, stdev=1424.34 00:33:10.076 clat percentiles (usec): 00:33:10.076 | 1.00th=[ 3294], 5.00th=[ 
3818], 10.00th=[ 4015], 20.00th=[ 4047], 00:33:10.076 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4424], 00:33:10.076 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 6128], 95.00th=[ 6128], 00:33:10.076 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 8160], 99.95th=[45351], 00:33:10.076 | 99.99th=[45351] 00:33:10.076 bw ( KiB/s): min=12985, max=14384, per=25.00%, avg=14045.44, stdev=416.49, samples=9 00:33:10.076 iops : min= 1623, max= 1798, avg=1755.67, stdev=52.10, samples=9 00:33:10.076 lat (msec) : 4=8.96%, 10=90.95%, 50=0.09% 00:33:10.076 cpu : usr=97.46%, sys=2.12%, ctx=7, majf=0, minf=9 00:33:10.076 IO depths : 1=0.1%, 2=2.1%, 4=70.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.076 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.076 issued rwts: total=8781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.076 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:10.076 filename1: (groupid=0, jobs=1): err= 0: pid=1663442: Fri Jun 7 21:50:09 2024 00:33:10.076 read: IOPS=1784, BW=13.9MiB/s (14.6MB/s)(69.7MiB/5003msec) 00:33:10.076 slat (nsec): min=5421, max=69658, avg=19897.42, stdev=11081.44 00:33:10.076 clat (usec): min=1317, max=45478, avg=4424.40, stdev=1420.97 00:33:10.076 lat (usec): min=1331, max=45494, avg=4444.30, stdev=1419.83 00:33:10.076 clat percentiles (usec): 00:33:10.076 | 1.00th=[ 2999], 5.00th=[ 3589], 10.00th=[ 3851], 20.00th=[ 3916], 00:33:10.076 | 30.00th=[ 4015], 40.00th=[ 4146], 50.00th=[ 4359], 60.00th=[ 4359], 00:33:10.076 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5669], 95.00th=[ 6128], 00:33:10.076 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 8356], 99.95th=[45351], 00:33:10.076 | 99.99th=[45351] 00:33:10.076 bw ( KiB/s): min=13344, max=14672, per=25.35%, avg=14243.56, stdev=380.21, samples=9 00:33:10.076 iops : min= 1668, max= 1834, avg=1780.44, stdev=47.53, samples=9 00:33:10.076 lat (msec) : 2=0.36%, 4=26.01%, 10=73.54%, 50=0.09% 00:33:10.076 cpu : usr=97.48%, sys=2.10%, ctx=10, majf=0, minf=9 00:33:10.076 IO depths : 1=0.1%, 2=2.4%, 4=69.9%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.076 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.076 issued rwts: total=8927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.076 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:10.076 00:33:10.076 Run status group 0 (all jobs): 00:33:10.076 READ: bw=54.9MiB/s (57.5MB/s), 13.6MiB/s-13.9MiB/s (14.2MB/s-14.6MB/s), io=275MiB (288MB), run=5001-5003msec 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.076 00:33:10.076 real 0m24.647s 00:33:10.076 user 5m6.947s 00:33:10.076 sys 0m4.776s 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 ************************************ 00:33:10.076 END TEST fio_dif_rand_params 00:33:10.076 ************************************ 00:33:10.076 21:50:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:10.076 21:50:09 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:10.076 21:50:09 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 ************************************ 00:33:10.076 START TEST fio_dif_digest 00:33:10.076 ************************************ 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 bdev_null0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:10.076 [2024-06-07 21:50:09.493239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:10.076 { 00:33:10.076 "params": { 00:33:10.076 "name": "Nvme$subsystem", 00:33:10.076 "trtype": "$TEST_TRANSPORT", 00:33:10.076 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:33:10.076 "adrfam": "ipv4", 00:33:10.076 "trsvcid": "$NVMF_PORT", 00:33:10.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.076 "hdgst": ${hdgst:-false}, 00:33:10.076 "ddgst": ${ddgst:-false} 00:33:10.076 }, 00:33:10.076 "method": "bdev_nvme_attach_controller" 00:33:10.076 } 00:33:10.076 EOF 00:33:10.076 )") 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.076 21:50:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:10.077 "params": { 00:33:10.077 "name": "Nvme0", 00:33:10.077 "trtype": "tcp", 00:33:10.077 "traddr": "10.0.0.2", 00:33:10.077 "adrfam": "ipv4", 00:33:10.077 "trsvcid": "4420", 00:33:10.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:10.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:10.077 "hdgst": true, 00:33:10.077 "ddgst": true 00:33:10.077 }, 00:33:10.077 "method": "bdev_nvme_attach_controller" 00:33:10.077 }' 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:10.077 21:50:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.077 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:10.077 ... 
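Before fio starts below, the target side of this digest pass is fully stood up; the only knobs that differ from the previous pass are "hdgst": true and "ddgst": true in the resolved config above, which ask the initiator to enable NVMe/TCP header and data digests (CRC32C) on each PDU. For orientation, the four rpc_cmd calls traced above reduce to these plain rpc.py invocations (rpc_cmd is the harness wrapper around rpc.py; socket selection is omitted here, and every argument is verbatim from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3.
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # Subsystem, namespace, and TCP listener, exactly as traced above.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420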
00:33:10.077 fio-3.35 00:33:10.077 Starting 3 threads 00:33:10.077 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.282 00:33:22.282 filename0: (groupid=0, jobs=1): err= 0: pid=1664748: Fri Jun 7 21:50:20 2024 00:33:22.282 read: IOPS=191, BW=24.0MiB/s (25.2MB/s)(241MiB/10047msec) 00:33:22.282 slat (nsec): min=9850, max=80088, avg=19306.13, stdev=7093.28 00:33:22.282 clat (usec): min=9209, max=58518, avg=15585.15, stdev=2600.16 00:33:22.282 lat (usec): min=9243, max=58551, avg=15604.46, stdev=2600.34 00:33:22.282 clat percentiles (usec): 00:33:22.282 | 1.00th=[10421], 5.00th=[11731], 10.00th=[13698], 20.00th=[14484], 00:33:22.282 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:33:22.282 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:33:22.282 | 99.00th=[18482], 99.50th=[19006], 99.90th=[57410], 99.95th=[58459], 00:33:22.282 | 99.99th=[58459] 00:33:22.282 bw ( KiB/s): min=22528, max=26880, per=35.04%, avg=24652.80, stdev=947.36, samples=20 00:33:22.282 iops : min= 176, max= 210, avg=192.60, stdev= 7.40, samples=20 00:33:22.282 lat (msec) : 10=0.52%, 20=99.17%, 50=0.05%, 100=0.26% 00:33:22.282 cpu : usr=96.28%, sys=3.32%, ctx=14, majf=0, minf=180 00:33:22.282 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.282 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.282 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:22.282 filename0: (groupid=0, jobs=1): err= 0: pid=1664749: Fri Jun 7 21:50:20 2024 00:33:22.283 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(224MiB/10004msec) 00:33:22.283 slat (nsec): min=10106, max=51321, avg=21521.72, stdev=7368.21 00:33:22.283 clat (msec): min=9, max=101, avg=16.76, stdev= 4.68 00:33:22.283 lat (msec): min=9, max=101, avg=16.78, stdev= 4.68 00:33:22.283 clat percentiles (msec): 00:33:22.283 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:33:22.283 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 17], 00:33:22.283 | 70.00th=[ 18], 80.00th=[ 18], 90.00th=[ 19], 95.00th=[ 19], 00:33:22.283 | 99.00th=[ 22], 99.50th=[ 59], 99.90th=[ 103], 99.95th=[ 103], 00:33:22.283 | 99.99th=[ 103] 00:33:22.283 bw ( KiB/s): min=16896, max=25088, per=32.50%, avg=22862.53, stdev=1835.43, samples=19 00:33:22.283 iops : min= 132, max= 196, avg=178.58, stdev=14.36, samples=19 00:33:22.283 lat (msec) : 10=0.11%, 20=98.71%, 50=0.45%, 100=0.62%, 250=0.11% 00:33:22.283 cpu : usr=95.69%, sys=3.93%, ctx=20, majf=0, minf=190 00:33:22.283 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.283 issued rwts: total=1788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.283 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:22.283 filename0: (groupid=0, jobs=1): err= 0: pid=1664750: Fri Jun 7 21:50:20 2024 00:33:22.283 read: IOPS=179, BW=22.5MiB/s (23.6MB/s)(226MiB/10047msec) 00:33:22.283 slat (nsec): min=9992, max=50566, avg=19917.55, stdev=7683.54 00:33:22.283 clat (msec): min=9, max=100, avg=16.64, stdev= 5.54 00:33:22.283 lat (msec): min=9, max=100, avg=16.66, stdev= 5.54 00:33:22.283 clat percentiles (msec): 00:33:22.283 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 
16], 00:33:22.283 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 17], 00:33:22.283 | 70.00th=[ 17], 80.00th=[ 18], 90.00th=[ 18], 95.00th=[ 19], 00:33:22.283 | 99.00th=[ 57], 99.50th=[ 59], 99.90th=[ 101], 99.95th=[ 101], 00:33:22.283 | 99.99th=[ 101] 00:33:22.283 bw ( KiB/s): min=19456, max=26368, per=32.83%, avg=23093.55, stdev=1515.95, samples=20 00:33:22.283 iops : min= 152, max= 206, avg=180.40, stdev=11.83, samples=20 00:33:22.283 lat (msec) : 10=0.11%, 20=98.50%, 50=0.11%, 100=1.16%, 250=0.11% 00:33:22.283 cpu : usr=96.67%, sys=2.95%, ctx=14, majf=0, minf=142 00:33:22.283 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:22.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:22.283 issued rwts: total=1806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:22.283 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:22.283 00:33:22.283 Run status group 0 (all jobs): 00:33:22.283 READ: bw=68.7MiB/s (72.0MB/s), 22.3MiB/s-24.0MiB/s (23.4MB/s-25.2MB/s), io=690MiB (724MB), run=10004-10047msec 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:22.283 00:33:22.283 real 0m11.312s 00:33:22.283 user 0m40.009s 00:33:22.283 sys 0m1.376s 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:22.283 21:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:22.283 ************************************ 00:33:22.283 END TEST fio_dif_digest 00:33:22.283 ************************************ 00:33:22.283 21:50:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:22.283 21:50:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:22.283 rmmod nvme_tcp 00:33:22.283 rmmod nvme_fabrics 00:33:22.283 rmmod nvme_keyring 00:33:22.283 21:50:20 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1655203 ']' 00:33:22.283 21:50:20 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1655203 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 1655203 ']' 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 1655203 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1655203 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1655203' 00:33:22.283 killing process with pid 1655203 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@968 -- # kill 1655203 00:33:22.283 21:50:20 nvmf_dif -- common/autotest_common.sh@973 -- # wait 1655203 00:33:22.283 21:50:21 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:22.283 21:50:21 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:24.189 Waiting for block devices as requested 00:33:24.189 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:33:24.189 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:24.189 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:24.189 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:24.448 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:24.448 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:24.448 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:24.707 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:24.707 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:24.707 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:24.707 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:24.967 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:24.967 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:24.967 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:25.226 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:25.226 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:25.226 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:25.485 21:50:25 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:25.485 21:50:25 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:25.485 21:50:25 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:25.485 21:50:25 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:25.485 21:50:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.485 21:50:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:25.485 21:50:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.389 21:50:27 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:27.389 00:33:27.389 real 1m16.539s 00:33:27.389 user 7m40.801s 00:33:27.389 sys 0m19.945s 00:33:27.389 21:50:27 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:27.389 21:50:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:27.389 ************************************ 
00:33:27.389 END TEST nvmf_dif 00:33:27.389 ************************************ 00:33:27.389 21:50:27 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:27.389 21:50:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:27.389 21:50:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:27.389 21:50:27 -- common/autotest_common.sh@10 -- # set +x 00:33:27.389 ************************************ 00:33:27.389 START TEST nvmf_abort_qd_sizes 00:33:27.389 ************************************ 00:33:27.390 21:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:27.648 * Looking for test storage... 00:33:27.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.648 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.649 21:50:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:33:27.649 21:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:33:34.219 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:34.220 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:34.220 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:34.220 Found net devices under 0000:af:00.0: cvl_0_0 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:34.220 Found net devices under 0000:af:00.1: cvl_0_1 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
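The device discovery that just completed reduces to a sysfs lookup: for each supported PCI function (the two E810 ports found above), the kernel netdev name is read out of the device's net/ directory, which is exactly what the pci_net_devs=(...) expansion in the trace does. An illustrative equivalent, using the addresses this host reported:

    # Map each detected NIC PCI function to its netdev name.
    for pci in 0000:af:00.0 0000:af:00.1; do
      ls /sys/bus/pci/devices/$pci/net/    # -> cvl_0_0 and cvl_0_1 on this host
    done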
00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:34.220 21:50:33 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:34.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:34.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:33:34.220 00:33:34.220 --- 10.0.0.2 ping statistics --- 00:33:34.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.220 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:34.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:34.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:33:34.220 00:33:34.220 --- 10.0.0.1 ping statistics --- 00:33:34.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.220 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:34.220 21:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:37.511 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:37.511 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:38.080 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1673574 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1673574 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 1673574 ']' 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:38.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:38.426 21:50:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:38.426 [2024-06-07 21:50:38.491044] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:33:38.426 [2024-06-07 21:50:38.491097] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.426 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.426 [2024-06-07 21:50:38.586781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:38.702 [2024-06-07 21:50:38.681337] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:38.702 [2024-06-07 21:50:38.681379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.702 [2024-06-07 21:50:38.681389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.702 [2024-06-07 21:50:38.681397] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.702 [2024-06-07 21:50:38.681405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:38.702 [2024-06-07 21:50:38.681461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.702 [2024-06-07 21:50:38.681588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:38.702 [2024-06-07 21:50:38.681693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:38.702 [2024-06-07 21:50:38.681694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:86:00.0 ]] 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:33:39.270 21:50:39 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:86:00.0 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:39.270 21:50:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:39.270 ************************************ 00:33:39.270 START TEST spdk_target_abort 00:33:39.270 ************************************ 00:33:39.270 21:50:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:33:39.270 21:50:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:39.270 21:50:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:33:39.270 21:50:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.270 21:50:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:42.560 spdk_targetn1 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:42.560 [2024-06-07 21:50:42.384342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:42.560 [2024-06-07 21:50:42.420588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.560 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:42.561 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.561 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:42.561 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.561 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:42.561 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.561 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:42.561 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:42.561 21:50:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:42.561 EAL: No free 2048 kB hugepages 
reported on node 1 00:33:45.847 Initializing NVMe Controllers 00:33:45.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:45.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:45.847 Initialization complete. Launching workers. 00:33:45.847 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10192, failed: 0 00:33:45.847 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1874, failed to submit 8318 00:33:45.847 success 808, unsuccess 1066, failed 0 00:33:45.847 21:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:45.847 21:50:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:45.847 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.131 Initializing NVMe Controllers 00:33:49.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:49.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:49.131 Initialization complete. Launching workers. 00:33:49.131 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8512, failed: 0 00:33:49.131 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7294 00:33:49.131 success 329, unsuccess 889, failed 0 00:33:49.131 21:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:49.131 21:50:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:49.131 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.418 Initializing NVMe Controllers 00:33:52.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:52.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:52.418 Initialization complete. Launching workers. 
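The spdk_target_abort passes above are the same abort example binary run at increasing queue depths against the target built at abort_qd_sizes.sh@45-@50. Collected into one runnable sequence (paths, NQN and connection string are copied from this trace; plain rpc.py calls stand in for the rpc_cmd wrapper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # expose the local NVMe disk as namespace 1 of a TCP subsystem
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # one abort pass per queue depth, 50% reads / 50% writes, 4 KiB I/O
  for qd in 4 24 64; do
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q $qd -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done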
00:33:52.418 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38410, failed: 0 00:33:52.418 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2601, failed to submit 35809 00:33:52.418 success 599, unsuccess 2002, failed 0 00:33:52.418 21:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:52.418 21:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.418 21:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.418 21:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.418 21:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:52.418 21:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.418 21:50:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1673574 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 1673574 ']' 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 1673574 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1673574 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1673574' 00:33:53.367 killing process with pid 1673574 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 1673574 00:33:53.367 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 1673574 00:33:53.625 00:33:53.625 real 0m14.273s 00:33:53.625 user 0m57.317s 00:33:53.625 sys 0m2.174s 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:53.625 ************************************ 00:33:53.625 END TEST spdk_target_abort 00:33:53.625 ************************************ 00:33:53.625 21:50:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:53.625 21:50:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:53.625 21:50:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:53.625 21:50:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:53.625 ************************************ 00:33:53.625 START TEST kernel_target_abort 00:33:53.625 
************************************ 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:53.625 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:53.626 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:53.626 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:53.626 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:53.883 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:53.883 21:50:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:57.166 Waiting for block devices as requested 00:33:57.166 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:33:57.166 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:57.166 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.166 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:57.166 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:57.166 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:57.166 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:57.425 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:57.425 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:57.425 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:57.425 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.683 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:57.683 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:57.683 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:57.940 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:57.940 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:57.940 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:57.940 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:58.198 No valid GPT data, bailing 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:58.198 21:50:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:58.198 00:33:58.198 Discovery Log Number of Records 2, Generation counter 2 00:33:58.198 =====Discovery Log Entry 0====== 00:33:58.198 trtype: tcp 00:33:58.198 adrfam: ipv4 00:33:58.198 subtype: current discovery subsystem 00:33:58.198 treq: not specified, sq flow control disable supported 00:33:58.198 portid: 1 00:33:58.198 trsvcid: 4420 00:33:58.198 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:58.198 traddr: 10.0.0.1 00:33:58.198 eflags: none 00:33:58.198 sectype: none 00:33:58.198 =====Discovery Log Entry 1====== 00:33:58.198 trtype: tcp 00:33:58.198 adrfam: ipv4 00:33:58.198 subtype: nvme subsystem 00:33:58.198 treq: not specified, sq flow control disable supported 00:33:58.198 portid: 1 00:33:58.198 trsvcid: 4420 00:33:58.198 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:58.198 traddr: 10.0.0.1 00:33:58.198 eflags: none 00:33:58.198 sectype: none 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.198 21:50:58 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.198 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:58.199 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.199 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:58.199 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:58.199 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.199 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:58.199 21:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:58.199 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.480 Initializing NVMe Controllers 00:34:01.480 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:01.480 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:01.480 Initialization complete. Launching workers. 00:34:01.480 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42354, failed: 0 00:34:01.480 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 42354, failed to submit 0 00:34:01.480 success 0, unsuccess 42354, failed 0 00:34:01.480 21:51:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:01.480 21:51:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:01.480 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.763 Initializing NVMe Controllers 00:34:04.763 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:04.763 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:04.763 Initialization complete. Launching workers. 
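The kernel_target_abort setup at nvmf/common.sh@658-@677 above is plain configfs manipulation. The xtrace strips the redirection targets from the echo commands, so the standard nvmet attribute names are filled in below as assumptions; the values are the ones echoed in this run:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"    # @665
  echo 1 > "$sub/attr_allow_any_host"                           # @667
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"           # @668, the disk that passed the GPT check
  echo 1 > "$sub/namespaces/1/enable"                           # @669
  echo 10.0.0.1 > "$port/addr_traddr"                           # @671
  echo tcp > "$port/addr_trtype"                                # @672
  echo 4420 > "$port/addr_trsvcid"                              # @673
  echo ipv4 > "$port/addr_adrfam"                               # @674
  ln -s "$sub" "$port/subsystems/"                              # @677, expose the subsystem on the port

The nvme discover output captured above confirms the result: two discovery log entries, both reachable on 10.0.0.1:4420.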
00:34:04.763 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74864, failed: 0 00:34:04.763 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18890, failed to submit 55974 00:34:04.763 success 0, unsuccess 18890, failed 0 00:34:04.763 21:51:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:04.763 21:51:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:04.763 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.049 Initializing NVMe Controllers 00:34:08.049 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:08.049 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:08.049 Initialization complete. Launching workers. 00:34:08.049 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72285, failed: 0 00:34:08.049 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18050, failed to submit 54235 00:34:08.049 success 0, unsuccess 18050, failed 0 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:08.049 21:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:10.585 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:34:10.585 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:10.585 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:11.523 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:34:11.781 00:34:11.781 real 0m17.954s 00:34:11.781 user 0m7.572s 00:34:11.781 sys 0m5.578s 00:34:11.781 21:51:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:11.781 21:51:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:11.781 ************************************ 00:34:11.781 END TEST kernel_target_abort 00:34:11.781 ************************************ 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:11.781 rmmod nvme_tcp 00:34:11.781 rmmod nvme_fabrics 00:34:11.781 rmmod nvme_keyring 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1673574 ']' 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1673574 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 1673574 ']' 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 1673574 00:34:11.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1673574) - No such process 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 1673574 is not found' 00:34:11.781 Process with pid 1673574 is not found 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:34:11.781 21:51:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:15.153 Waiting for block devices as requested 00:34:15.153 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:34:15.153 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:15.153 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:15.153 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:15.153 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:15.153 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:15.446 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:15.446 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:15.446 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:15.446 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:15.707 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:15.707 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:15.707 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:15.707 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:15.965 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:15.965 0000:80:04.1 
(8086 2021): vfio-pci -> ioatdma 00:34:15.965 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:16.224 21:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:16.224 21:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:16.224 21:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:16.224 21:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:16.224 21:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.224 21:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:16.224 21:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.128 21:51:18 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:18.128 00:34:18.128 real 0m50.694s 00:34:18.128 user 1m9.757s 00:34:18.128 sys 0m17.213s 00:34:18.128 21:51:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:18.128 21:51:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:18.128 ************************************ 00:34:18.128 END TEST nvmf_abort_qd_sizes 00:34:18.128 ************************************ 00:34:18.128 21:51:18 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:18.128 21:51:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:18.128 21:51:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:18.128 21:51:18 -- common/autotest_common.sh@10 -- # set +x 00:34:18.387 ************************************ 00:34:18.387 START TEST keyring_file 00:34:18.387 ************************************ 00:34:18.387 21:51:18 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:18.387 * Looking for test storage... 
00:34:18.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:18.387 21:51:18 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:18.387 21:51:18 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:18.387 21:51:18 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.388 21:51:18 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.388 21:51:18 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.388 21:51:18 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.388 21:51:18 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.388 21:51:18 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.388 21:51:18 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.388 21:51:18 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:18.388 21:51:18 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@47 -- # : 0 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rUMtAkuFWk 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:18.388 21:51:18 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rUMtAkuFWk 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rUMtAkuFWk 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rUMtAkuFWk 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cJmsduvyiy 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:18.388 21:51:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cJmsduvyiy 00:34:18.388 21:51:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cJmsduvyiy 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.cJmsduvyiy 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@30 -- # tgtpid=1683862 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:18.388 21:51:18 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1683862 00:34:18.388 21:51:18 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1683862 ']' 00:34:18.388 21:51:18 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.388 21:51:18 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:18.388 21:51:18 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.388 21:51:18 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:18.388 21:51:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:18.646 [2024-06-07 21:51:18.686447] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
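prep_key (keyring/common.sh@15-@23 above) writes each hex key to a mktemp file in the NVMe TLS PSK interchange format and locks the permissions down to 0600. A sketch of what the inlined python at nvmf/common.sh@705 produces for digest 0, assuming the TP 8006 interchange layout (key bytes followed by a little-endian CRC32 of the key, base64-encoded, with "00" meaning no hash):

  key=00112233445566778899aabbccddeeff   # key0 from file.sh@15
  path=$(mktemp)                         # /tmp/tmp.rUMtAkuFWk in this run
  python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key" > "$path"
  chmod 0600 "$path"                     # laxer modes are rejected, as the 0660 check at file.sh@80-@81 below shows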
00:34:18.646 [2024-06-07 21:51:18.686507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683862 ] 00:34:18.646 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.646 [2024-06-07 21:51:18.776260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.646 [2024-06-07 21:51:18.867314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:34:19.581 21:51:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:19.581 [2024-06-07 21:51:19.634812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.581 null0 00:34:19.581 [2024-06-07 21:51:19.666863] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:19.581 [2024-06-07 21:51:19.667232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:19.581 [2024-06-07 21:51:19.674881] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.581 21:51:19 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:19.581 [2024-06-07 21:51:19.686917] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:19.581 request: 00:34:19.581 { 00:34:19.581 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.581 "secure_channel": false, 00:34:19.581 "listen_address": { 00:34:19.581 "trtype": "tcp", 00:34:19.581 "traddr": "127.0.0.1", 00:34:19.581 "trsvcid": "4420" 00:34:19.581 }, 00:34:19.581 "method": "nvmf_subsystem_add_listener", 00:34:19.581 "req_id": 1 00:34:19.581 } 00:34:19.581 Got JSON-RPC error response 00:34:19.581 response: 00:34:19.581 { 00:34:19.581 "code": -32602, 00:34:19.581 "message": "Invalid parameters" 00:34:19.581 } 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@652 -- # es=1 
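file.sh@43 above is a negative test: the tgt already listens on 127.0.0.1:4420 for nqn.2016-06.io.spdk:cnode0 (created through the rpc_cmd heredoc at file.sh@33, which the trace does not expand), so a second nvmf_subsystem_add_listener must fail and the NOT wrapper inverts the exit status. Reduced to a plain rpc.py call:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
      echo 'FAIL: duplicate listener was accepted'; exit 1
  fi
  # expected failure: JSON-RPC error -32602 (Invalid parameters), matching the response captured above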
00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:19.581 21:51:19 keyring_file -- keyring/file.sh@46 -- # bperfpid=1684047 00:34:19.581 21:51:19 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1684047 /var/tmp/bperf.sock 00:34:19.581 21:51:19 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1684047 ']' 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:19.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:19.581 21:51:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:19.581 [2024-06-07 21:51:19.740647] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 00:34:19.581 [2024-06-07 21:51:19.740702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684047 ] 00:34:19.581 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.581 [2024-06-07 21:51:19.822272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.840 [2024-06-07 21:51:19.910200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.406 21:51:20 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:20.406 21:51:20 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:34:20.406 21:51:20 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rUMtAkuFWk 00:34:20.406 21:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rUMtAkuFWk 00:34:20.664 21:51:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cJmsduvyiy 00:34:20.664 21:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cJmsduvyiy 00:34:20.922 21:51:21 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:34:20.922 21:51:21 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:34:20.922 21:51:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:20.922 21:51:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:20.922 21:51:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:21.180 21:51:21 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.rUMtAkuFWk == \/\t\m\p\/\t\m\p\.\r\U\M\t\A\k\u\F\W\k ]] 00:34:21.180 21:51:21 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:34:21.180 21:51:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:21.180 21:51:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.180 21:51:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:21.180 21:51:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:21.438 21:51:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cJmsduvyiy == \/\t\m\p\/\t\m\p\.\c\J\m\s\d\u\v\y\i\y ]] 00:34:21.438 21:51:21 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:34:21.438 21:51:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:21.438 21:51:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:21.438 21:51:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.438 21:51:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:21.438 21:51:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:21.696 21:51:21 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:34:21.696 21:51:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:34:21.696 21:51:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:21.696 21:51:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:21.696 21:51:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.696 21:51:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:21.696 21:51:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:21.954 21:51:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:21.954 21:51:22 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:21.954 21:51:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:22.213 [2024-06-07 21:51:22.291736] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:22.213 nvme0n1 00:34:22.213 21:51:22 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:34:22.213 21:51:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:22.213 21:51:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:22.213 21:51:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:22.213 21:51:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:22.213 21:51:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.471 21:51:22 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:34:22.471 21:51:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:34:22.471 21:51:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:22.471 21:51:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:22.471 21:51:22 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:22.471 21:51:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:22.471 21:51:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.729 21:51:22 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:34:22.729 21:51:22 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:22.729 Running I/O for 1 seconds... 00:34:24.102 00:34:24.102 Latency(us) 00:34:24.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:24.102 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:24.102 nvme0n1 : 1.02 6453.57 25.21 0.00 0.00 19668.73 3932.16 23354.65 00:34:24.102 =================================================================================================================== 00:34:24.102 Total : 6453.57 25.21 0.00 0.00 19668.73 3932.16 23354.65 00:34:24.102 0 00:34:24.102 21:51:24 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:24.102 21:51:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:24.102 21:51:24 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:34:24.102 21:51:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:24.102 21:51:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.102 21:51:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.102 21:51:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:24.102 21:51:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:24.361 21:51:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:34:24.361 21:51:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:34:24.361 21:51:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:24.361 21:51:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.361 21:51:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.361 21:51:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:24.361 21:51:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:24.619 21:51:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:24.619 21:51:24 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:24.619 21:51:24 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:34:24.619 21:51:24 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:24.619 21:51:24 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:34:24.619 21:51:24 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:24.619 21:51:24 keyring_file -- 
common/autotest_common.sh@641 -- # type -t bperf_cmd 00:34:24.619 21:51:24 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:24.619 21:51:24 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:24.619 21:51:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:24.878 [2024-06-07 21:51:24.993238] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:24.878 [2024-06-07 21:51:24.994118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x666650 (107): Transport endpoint is not connected 00:34:24.878 [2024-06-07 21:51:24.995111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x666650 (9): Bad file descriptor 00:34:24.878 [2024-06-07 21:51:24.996111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:24.878 [2024-06-07 21:51:24.996123] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:24.878 [2024-06-07 21:51:24.996134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:24.878 request: 00:34:24.878 { 00:34:24.878 "name": "nvme0", 00:34:24.878 "trtype": "tcp", 00:34:24.878 "traddr": "127.0.0.1", 00:34:24.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.878 "adrfam": "ipv4", 00:34:24.878 "trsvcid": "4420", 00:34:24.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.878 "psk": "key1", 00:34:24.878 "method": "bdev_nvme_attach_controller", 00:34:24.878 "req_id": 1 00:34:24.878 } 00:34:24.878 Got JSON-RPC error response 00:34:24.878 response: 00:34:24.878 { 00:34:24.878 "code": -5, 00:34:24.878 "message": "Input/output error" 00:34:24.878 } 00:34:24.878 21:51:25 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:34:24.878 21:51:25 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:24.878 21:51:25 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:24.878 21:51:25 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:24.878 21:51:25 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:34:24.878 21:51:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:24.878 21:51:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.878 21:51:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.878 21:51:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:24.878 21:51:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:25.137 21:51:25 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:34:25.137 21:51:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:34:25.137 21:51:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:25.137 21:51:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:25.137 21:51:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:34:25.137 21:51:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:25.137 21:51:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:25.395 21:51:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:25.395 21:51:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:34:25.396 21:51:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:25.654 21:51:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:34:25.654 21:51:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:25.913 21:51:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:34:25.913 21:51:26 keyring_file -- keyring/file.sh@77 -- # jq length 00:34:25.913 21:51:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:26.172 21:51:26 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:34:26.172 21:51:26 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.rUMtAkuFWk 00:34:26.172 21:51:26 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rUMtAkuFWk 00:34:26.172 21:51:26 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:34:26.172 21:51:26 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rUMtAkuFWk 00:34:26.172 21:51:26 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:34:26.172 21:51:26 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:26.172 21:51:26 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:34:26.172 21:51:26 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:26.172 21:51:26 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rUMtAkuFWk 00:34:26.172 21:51:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rUMtAkuFWk 00:34:26.432 [2024-06-07 21:51:26.492143] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rUMtAkuFWk': 0100660 00:34:26.432 [2024-06-07 21:51:26.492171] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:26.432 request: 00:34:26.432 { 00:34:26.432 "name": "key0", 00:34:26.432 "path": "/tmp/tmp.rUMtAkuFWk", 00:34:26.432 "method": "keyring_file_add_key", 00:34:26.432 "req_id": 1 00:34:26.432 } 00:34:26.432 Got JSON-RPC error response 00:34:26.432 response: 00:34:26.432 { 00:34:26.432 "code": -1, 00:34:26.432 "message": "Operation not permitted" 00:34:26.432 } 00:34:26.432 21:51:26 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:34:26.432 21:51:26 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:26.432 21:51:26 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:26.432 21:51:26 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:26.432 21:51:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 
/tmp/tmp.rUMtAkuFWk 00:34:26.432 21:51:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rUMtAkuFWk 00:34:26.432 21:51:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rUMtAkuFWk 00:34:26.691 21:51:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.rUMtAkuFWk 00:34:26.691 21:51:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:34:26.691 21:51:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:26.691 21:51:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:26.691 21:51:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:26.691 21:51:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:26.691 21:51:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:26.951 21:51:27 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:34:26.951 21:51:27 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.951 21:51:27 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:34:26.951 21:51:27 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.951 21:51:27 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:34:26.951 21:51:27 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:26.951 21:51:27 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:34:26.951 21:51:27 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:26.951 21:51:27 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.951 21:51:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:27.210 [2024-06-07 21:51:27.238151] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rUMtAkuFWk': No such file or directory 00:34:27.210 [2024-06-07 21:51:27.238178] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:27.210 [2024-06-07 21:51:27.238208] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:27.210 [2024-06-07 21:51:27.238216] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:27.210 [2024-06-07 21:51:27.238224] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:27.210 request: 00:34:27.210 { 00:34:27.210 "name": "nvme0", 00:34:27.210 "trtype": "tcp", 00:34:27.210 "traddr": "127.0.0.1", 00:34:27.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:27.210 "adrfam": "ipv4", 00:34:27.210 "trsvcid": "4420", 00:34:27.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:34:27.210 "psk": "key0", 00:34:27.210 "method": "bdev_nvme_attach_controller", 00:34:27.210 "req_id": 1 00:34:27.210 } 00:34:27.210 Got JSON-RPC error response 00:34:27.210 response: 00:34:27.210 { 00:34:27.210 "code": -19, 00:34:27.210 "message": "No such device" 00:34:27.210 } 00:34:27.210 21:51:27 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:34:27.210 21:51:27 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:27.210 21:51:27 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:27.210 21:51:27 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:27.210 21:51:27 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:34:27.210 21:51:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:27.469 21:51:27 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:27.469 21:51:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:27.469 21:51:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:27.469 21:51:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:27.469 21:51:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:27.469 21:51:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:27.469 21:51:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Xg12RYl3iv 00:34:27.469 21:51:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:27.469 21:51:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:27.469 21:51:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:34:27.469 21:51:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:27.469 21:51:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:27.469 21:51:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:34:27.469 21:51:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:34:27.470 21:51:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Xg12RYl3iv 00:34:27.470 21:51:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Xg12RYl3iv 00:34:27.470 21:51:27 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Xg12RYl3iv 00:34:27.470 21:51:27 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Xg12RYl3iv 00:34:27.470 21:51:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Xg12RYl3iv 00:34:27.729 21:51:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:27.729 21:51:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:27.988 nvme0n1 00:34:27.988 21:51:28 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:34:27.988 21:51:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:27.988 21:51:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:27.988 21:51:28 keyring_file 
-- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:27.988 21:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.988 21:51:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:28.247 21:51:28 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:34:28.247 21:51:28 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:34:28.247 21:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:28.506 21:51:28 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:34:28.506 21:51:28 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:34:28.506 21:51:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:28.506 21:51:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:28.506 21:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:28.765 21:51:28 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:34:28.765 21:51:28 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:34:28.765 21:51:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:28.765 21:51:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:28.765 21:51:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:28.765 21:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:28.765 21:51:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:29.024 21:51:29 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:34:29.024 21:51:29 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:29.024 21:51:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:29.283 21:51:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:34:29.283 21:51:29 keyring_file -- keyring/file.sh@104 -- # jq length 00:34:29.283 21:51:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:29.542 21:51:29 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:34:29.542 21:51:29 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Xg12RYl3iv 00:34:29.542 21:51:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Xg12RYl3iv 00:34:29.542 21:51:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cJmsduvyiy 00:34:29.542 21:51:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cJmsduvyiy 00:34:29.801 21:51:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:29.801 21:51:30 keyring_file -- keyring/common.sh@8 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:30.060 nvme0n1 00:34:30.318 21:51:30 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:34:30.318 21:51:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:30.576 21:51:30 keyring_file -- keyring/file.sh@112 -- # config='{ 00:34:30.576 "subsystems": [ 00:34:30.576 { 00:34:30.576 "subsystem": "keyring", 00:34:30.576 "config": [ 00:34:30.576 { 00:34:30.576 "method": "keyring_file_add_key", 00:34:30.577 "params": { 00:34:30.577 "name": "key0", 00:34:30.577 "path": "/tmp/tmp.Xg12RYl3iv" 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "keyring_file_add_key", 00:34:30.577 "params": { 00:34:30.577 "name": "key1", 00:34:30.577 "path": "/tmp/tmp.cJmsduvyiy" 00:34:30.577 } 00:34:30.577 } 00:34:30.577 ] 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "subsystem": "iobuf", 00:34:30.577 "config": [ 00:34:30.577 { 00:34:30.577 "method": "iobuf_set_options", 00:34:30.577 "params": { 00:34:30.577 "small_pool_count": 8192, 00:34:30.577 "large_pool_count": 1024, 00:34:30.577 "small_bufsize": 8192, 00:34:30.577 "large_bufsize": 135168 00:34:30.577 } 00:34:30.577 } 00:34:30.577 ] 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "subsystem": "sock", 00:34:30.577 "config": [ 00:34:30.577 { 00:34:30.577 "method": "sock_set_default_impl", 00:34:30.577 "params": { 00:34:30.577 "impl_name": "posix" 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "sock_impl_set_options", 00:34:30.577 "params": { 00:34:30.577 "impl_name": "ssl", 00:34:30.577 "recv_buf_size": 4096, 00:34:30.577 "send_buf_size": 4096, 00:34:30.577 "enable_recv_pipe": true, 00:34:30.577 "enable_quickack": false, 00:34:30.577 "enable_placement_id": 0, 00:34:30.577 "enable_zerocopy_send_server": true, 00:34:30.577 "enable_zerocopy_send_client": false, 00:34:30.577 "zerocopy_threshold": 0, 00:34:30.577 "tls_version": 0, 00:34:30.577 "enable_ktls": false 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "sock_impl_set_options", 00:34:30.577 "params": { 00:34:30.577 "impl_name": "posix", 00:34:30.577 "recv_buf_size": 2097152, 00:34:30.577 "send_buf_size": 2097152, 00:34:30.577 "enable_recv_pipe": true, 00:34:30.577 "enable_quickack": false, 00:34:30.577 "enable_placement_id": 0, 00:34:30.577 "enable_zerocopy_send_server": true, 00:34:30.577 "enable_zerocopy_send_client": false, 00:34:30.577 "zerocopy_threshold": 0, 00:34:30.577 "tls_version": 0, 00:34:30.577 "enable_ktls": false 00:34:30.577 } 00:34:30.577 } 00:34:30.577 ] 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "subsystem": "vmd", 00:34:30.577 "config": [] 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "subsystem": "accel", 00:34:30.577 "config": [ 00:34:30.577 { 00:34:30.577 "method": "accel_set_options", 00:34:30.577 "params": { 00:34:30.577 "small_cache_size": 128, 00:34:30.577 "large_cache_size": 16, 00:34:30.577 "task_count": 2048, 00:34:30.577 "sequence_count": 2048, 00:34:30.577 "buf_count": 2048 00:34:30.577 } 00:34:30.577 } 00:34:30.577 ] 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "subsystem": "bdev", 00:34:30.577 "config": [ 00:34:30.577 { 00:34:30.577 "method": "bdev_set_options", 00:34:30.577 "params": { 00:34:30.577 "bdev_io_pool_size": 65535, 00:34:30.577 "bdev_io_cache_size": 256, 00:34:30.577 
"bdev_auto_examine": true, 00:34:30.577 "iobuf_small_cache_size": 128, 00:34:30.577 "iobuf_large_cache_size": 16 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "bdev_raid_set_options", 00:34:30.577 "params": { 00:34:30.577 "process_window_size_kb": 1024 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "bdev_iscsi_set_options", 00:34:30.577 "params": { 00:34:30.577 "timeout_sec": 30 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "bdev_nvme_set_options", 00:34:30.577 "params": { 00:34:30.577 "action_on_timeout": "none", 00:34:30.577 "timeout_us": 0, 00:34:30.577 "timeout_admin_us": 0, 00:34:30.577 "keep_alive_timeout_ms": 10000, 00:34:30.577 "arbitration_burst": 0, 00:34:30.577 "low_priority_weight": 0, 00:34:30.577 "medium_priority_weight": 0, 00:34:30.577 "high_priority_weight": 0, 00:34:30.577 "nvme_adminq_poll_period_us": 10000, 00:34:30.577 "nvme_ioq_poll_period_us": 0, 00:34:30.577 "io_queue_requests": 512, 00:34:30.577 "delay_cmd_submit": true, 00:34:30.577 "transport_retry_count": 4, 00:34:30.577 "bdev_retry_count": 3, 00:34:30.577 "transport_ack_timeout": 0, 00:34:30.577 "ctrlr_loss_timeout_sec": 0, 00:34:30.577 "reconnect_delay_sec": 0, 00:34:30.577 "fast_io_fail_timeout_sec": 0, 00:34:30.577 "disable_auto_failback": false, 00:34:30.577 "generate_uuids": false, 00:34:30.577 "transport_tos": 0, 00:34:30.577 "nvme_error_stat": false, 00:34:30.577 "rdma_srq_size": 0, 00:34:30.577 "io_path_stat": false, 00:34:30.577 "allow_accel_sequence": false, 00:34:30.577 "rdma_max_cq_size": 0, 00:34:30.577 "rdma_cm_event_timeout_ms": 0, 00:34:30.577 "dhchap_digests": [ 00:34:30.577 "sha256", 00:34:30.577 "sha384", 00:34:30.577 "sha512" 00:34:30.577 ], 00:34:30.577 "dhchap_dhgroups": [ 00:34:30.577 "null", 00:34:30.577 "ffdhe2048", 00:34:30.577 "ffdhe3072", 00:34:30.577 "ffdhe4096", 00:34:30.577 "ffdhe6144", 00:34:30.577 "ffdhe8192" 00:34:30.577 ] 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "bdev_nvme_attach_controller", 00:34:30.577 "params": { 00:34:30.577 "name": "nvme0", 00:34:30.577 "trtype": "TCP", 00:34:30.577 "adrfam": "IPv4", 00:34:30.577 "traddr": "127.0.0.1", 00:34:30.577 "trsvcid": "4420", 00:34:30.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:30.577 "prchk_reftag": false, 00:34:30.577 "prchk_guard": false, 00:34:30.577 "ctrlr_loss_timeout_sec": 0, 00:34:30.577 "reconnect_delay_sec": 0, 00:34:30.577 "fast_io_fail_timeout_sec": 0, 00:34:30.577 "psk": "key0", 00:34:30.577 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:30.577 "hdgst": false, 00:34:30.577 "ddgst": false 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "bdev_nvme_set_hotplug", 00:34:30.577 "params": { 00:34:30.577 "period_us": 100000, 00:34:30.577 "enable": false 00:34:30.577 } 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "method": "bdev_wait_for_examine" 00:34:30.577 } 00:34:30.577 ] 00:34:30.577 }, 00:34:30.577 { 00:34:30.577 "subsystem": "nbd", 00:34:30.577 "config": [] 00:34:30.577 } 00:34:30.577 ] 00:34:30.577 }' 00:34:30.577 21:51:30 keyring_file -- keyring/file.sh@114 -- # killprocess 1684047 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1684047 ']' 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1684047 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@954 -- # uname 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@955 
-- # ps --no-headers -o comm= 1684047 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1684047' 00:34:30.577 killing process with pid 1684047 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@968 -- # kill 1684047 00:34:30.577 Received shutdown signal, test time was about 1.000000 seconds 00:34:30.577 00:34:30.577 Latency(us) 00:34:30.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.577 =================================================================================================================== 00:34:30.577 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:30.577 21:51:30 keyring_file -- common/autotest_common.sh@973 -- # wait 1684047 00:34:30.836 21:51:30 keyring_file -- keyring/file.sh@117 -- # bperfpid=1686043 00:34:30.836 21:51:30 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1686043 /var/tmp/bperf.sock 00:34:30.836 21:51:30 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1686043 ']' 00:34:30.836 21:51:30 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:30.836 21:51:30 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:30.836 21:51:30 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:30.836 21:51:30 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:30.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
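The -c /dev/fd/63 in the bdevperf invocation above is the visible trace of bash process substitution: keyring/file.sh snapshots the live configuration with save_config, retires the first bdevperf, and replays the JSON into a fresh instance so both file-based keys are registered at startup. A minimal sketch of that pattern, with the binary and rpc.py paths assumed for a local checkout rather than taken from the suite:

# Sketch only: replay a saved JSON config into a new bdevperf instance.
BPERF=./build/examples/bdevperf   # assumed path
RPC=./scripts/rpc.py              # assumed path
config=$("$RPC" -s /var/tmp/bperf.sock save_config)   # JSON dump incl. the keyring subsystem
kill "$bperfpid"                                      # retire the old instance first
"$BPERF" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &  # <( ) is what appears as /dev/fd/63
bperfpid=$!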
00:34:30.836 21:51:30 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:30.836 21:51:30 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:34:30.836 "subsystems": [ 00:34:30.836 { 00:34:30.836 "subsystem": "keyring", 00:34:30.836 "config": [ 00:34:30.836 { 00:34:30.836 "method": "keyring_file_add_key", 00:34:30.836 "params": { 00:34:30.836 "name": "key0", 00:34:30.836 "path": "/tmp/tmp.Xg12RYl3iv" 00:34:30.836 } 00:34:30.836 }, 00:34:30.836 { 00:34:30.836 "method": "keyring_file_add_key", 00:34:30.836 "params": { 00:34:30.836 "name": "key1", 00:34:30.836 "path": "/tmp/tmp.cJmsduvyiy" 00:34:30.836 } 00:34:30.836 } 00:34:30.836 ] 00:34:30.836 }, 00:34:30.836 { 00:34:30.836 "subsystem": "iobuf", 00:34:30.836 "config": [ 00:34:30.836 { 00:34:30.836 "method": "iobuf_set_options", 00:34:30.836 "params": { 00:34:30.836 "small_pool_count": 8192, 00:34:30.836 "large_pool_count": 1024, 00:34:30.836 "small_bufsize": 8192, 00:34:30.836 "large_bufsize": 135168 00:34:30.836 } 00:34:30.836 } 00:34:30.836 ] 00:34:30.836 }, 00:34:30.836 { 00:34:30.836 "subsystem": "sock", 00:34:30.836 "config": [ 00:34:30.836 { 00:34:30.836 "method": "sock_set_default_impl", 00:34:30.836 "params": { 00:34:30.836 "impl_name": "posix" 00:34:30.836 } 00:34:30.836 }, 00:34:30.836 { 00:34:30.836 "method": "sock_impl_set_options", 00:34:30.836 "params": { 00:34:30.836 "impl_name": "ssl", 00:34:30.836 "recv_buf_size": 4096, 00:34:30.836 "send_buf_size": 4096, 00:34:30.836 "enable_recv_pipe": true, 00:34:30.836 "enable_quickack": false, 00:34:30.836 "enable_placement_id": 0, 00:34:30.836 "enable_zerocopy_send_server": true, 00:34:30.836 "enable_zerocopy_send_client": false, 00:34:30.836 "zerocopy_threshold": 0, 00:34:30.836 "tls_version": 0, 00:34:30.836 "enable_ktls": false 00:34:30.836 } 00:34:30.836 }, 00:34:30.836 { 00:34:30.836 "method": "sock_impl_set_options", 00:34:30.836 "params": { 00:34:30.836 "impl_name": "posix", 00:34:30.836 "recv_buf_size": 2097152, 00:34:30.836 "send_buf_size": 2097152, 00:34:30.836 "enable_recv_pipe": true, 00:34:30.836 "enable_quickack": false, 00:34:30.836 "enable_placement_id": 0, 00:34:30.836 "enable_zerocopy_send_server": true, 00:34:30.836 "enable_zerocopy_send_client": false, 00:34:30.836 "zerocopy_threshold": 0, 00:34:30.836 "tls_version": 0, 00:34:30.836 "enable_ktls": false 00:34:30.836 } 00:34:30.836 } 00:34:30.836 ] 00:34:30.836 }, 00:34:30.837 { 00:34:30.837 "subsystem": "vmd", 00:34:30.837 "config": [] 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "subsystem": "accel", 00:34:30.837 "config": [ 00:34:30.837 { 00:34:30.837 "method": "accel_set_options", 00:34:30.837 "params": { 00:34:30.837 "small_cache_size": 128, 00:34:30.837 "large_cache_size": 16, 00:34:30.837 "task_count": 2048, 00:34:30.837 "sequence_count": 2048, 00:34:30.837 "buf_count": 2048 00:34:30.837 } 00:34:30.837 } 00:34:30.837 ] 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "subsystem": "bdev", 00:34:30.837 "config": [ 00:34:30.837 { 00:34:30.837 "method": "bdev_set_options", 00:34:30.837 "params": { 00:34:30.837 "bdev_io_pool_size": 65535, 00:34:30.837 "bdev_io_cache_size": 256, 00:34:30.837 "bdev_auto_examine": true, 00:34:30.837 "iobuf_small_cache_size": 128, 00:34:30.837 "iobuf_large_cache_size": 16 00:34:30.837 } 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "method": "bdev_raid_set_options", 00:34:30.837 "params": { 00:34:30.837 "process_window_size_kb": 1024 00:34:30.837 } 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "method": "bdev_iscsi_set_options", 00:34:30.837 "params": { 00:34:30.837 
"timeout_sec": 30 00:34:30.837 } 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "method": "bdev_nvme_set_options", 00:34:30.837 "params": { 00:34:30.837 "action_on_timeout": "none", 00:34:30.837 "timeout_us": 0, 00:34:30.837 "timeout_admin_us": 0, 00:34:30.837 "keep_alive_timeout_ms": 10000, 00:34:30.837 "arbitration_burst": 0, 00:34:30.837 "low_priority_weight": 0, 00:34:30.837 "medium_priority_weight": 0, 00:34:30.837 "high_priority_weight": 0, 00:34:30.837 "nvme_adminq_poll_period_us": 10000, 00:34:30.837 "nvme_ioq_poll_period_us": 0, 00:34:30.837 "io_queue_requests": 512, 00:34:30.837 "delay_cmd_submit": true, 00:34:30.837 "transport_retry_count": 4, 00:34:30.837 "bdev_retry_count": 3, 00:34:30.837 "transport_ack_timeout": 0, 00:34:30.837 "ctrlr_loss_timeout_sec": 0, 00:34:30.837 "reconnect_delay_sec": 0, 00:34:30.837 "fast_io_fail_timeout_sec": 0, 00:34:30.837 "disable_auto_failback": false, 00:34:30.837 "generate_uuids": false, 00:34:30.837 "transport_tos": 0, 00:34:30.837 "nvme_error_stat": false, 00:34:30.837 "rdma_srq_size": 0, 00:34:30.837 "io_path_stat": false, 00:34:30.837 "allow_accel_sequence": false, 00:34:30.837 "rdma_max_cq_size": 0, 00:34:30.837 "rdma_cm_event_timeout_ms": 0, 00:34:30.837 "dhchap_digests": [ 00:34:30.837 "sha256", 00:34:30.837 "sha384", 00:34:30.837 "sha512" 00:34:30.837 ], 00:34:30.837 "dhchap_dhgroups": [ 00:34:30.837 "null", 00:34:30.837 "ffdhe2048", 00:34:30.837 "ffdhe3072", 00:34:30.837 "ffdhe4096", 00:34:30.837 "ffdhe6144", 00:34:30.837 "ffdhe8192" 00:34:30.837 ] 00:34:30.837 } 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "method": "bdev_nvme_attach_controller", 00:34:30.837 "params": { 00:34:30.837 "name": "nvme0", 00:34:30.837 "trtype": "TCP", 00:34:30.837 "adrfam": "IPv4", 00:34:30.837 "traddr": "127.0.0.1", 00:34:30.837 "trsvcid": "4420", 00:34:30.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:30.837 "prchk_reftag": false, 00:34:30.837 "prchk_guard": false, 00:34:30.837 "ctrlr_loss_timeout_sec": 0, 00:34:30.837 "reconnect_delay_sec": 0, 00:34:30.837 "fast_io_fail_timeout_sec": 0, 00:34:30.837 "psk": "key0", 00:34:30.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:30.837 "hdgst": false, 00:34:30.837 "ddgst": false 00:34:30.837 } 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "method": "bdev_nvme_set_hotplug", 00:34:30.837 "params": { 00:34:30.837 "period_us": 100000, 00:34:30.837 "enable": false 00:34:30.837 } 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "method": "bdev_wait_for_examine" 00:34:30.837 } 00:34:30.837 ] 00:34:30.837 }, 00:34:30.837 { 00:34:30.837 "subsystem": "nbd", 00:34:30.837 "config": [] 00:34:30.837 } 00:34:30.837 ] 00:34:30.837 }' 00:34:30.837 21:51:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:30.837 [2024-06-07 21:51:30.925985] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
00:34:30.837 [2024-06-07 21:51:30.926052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686043 ] 00:34:30.837 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.837 [2024-06-07 21:51:31.005547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.837 [2024-06-07 21:51:31.096124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.095 [2024-06-07 21:51:31.262483] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:31.660 21:51:31 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:31.660 21:51:31 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:34:31.660 21:51:31 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:34:31.660 21:51:31 keyring_file -- keyring/file.sh@120 -- # jq length 00:34:31.660 21:51:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:31.918 21:51:32 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:34:31.918 21:51:32 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:34:31.918 21:51:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:31.918 21:51:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:31.918 21:51:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:31.918 21:51:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:31.918 21:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:32.175 21:51:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:32.175 21:51:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:34:32.175 21:51:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:32.175 21:51:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:32.175 21:51:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:32.175 21:51:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:32.175 21:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:32.434 21:51:32 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:34:32.434 21:51:32 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:34:32.434 21:51:32 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:34:32.434 21:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:32.693 21:51:32 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:34:32.693 21:51:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:32.693 21:51:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Xg12RYl3iv /tmp/tmp.cJmsduvyiy 00:34:32.693 21:51:32 keyring_file -- keyring/file.sh@20 -- # killprocess 1686043 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1686043 ']' 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1686043 00:34:32.693 21:51:32 keyring_file -- 
common/autotest_common.sh@954 -- # uname 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1686043 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1686043' 00:34:32.693 killing process with pid 1686043 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@968 -- # kill 1686043 00:34:32.693 Received shutdown signal, test time was about 1.000000 seconds 00:34:32.693 00:34:32.693 Latency(us) 00:34:32.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.693 =================================================================================================================== 00:34:32.693 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:32.693 21:51:32 keyring_file -- common/autotest_common.sh@973 -- # wait 1686043 00:34:32.950 21:51:33 keyring_file -- keyring/file.sh@21 -- # killprocess 1683862 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1683862 ']' 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1683862 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@954 -- # uname 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1683862 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1683862' 00:34:32.950 killing process with pid 1683862 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@968 -- # kill 1683862 00:34:32.950 [2024-06-07 21:51:33.084705] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:32.950 21:51:33 keyring_file -- common/autotest_common.sh@973 -- # wait 1683862 00:34:33.208 00:34:33.208 real 0m15.027s 00:34:33.208 user 0m36.780s 00:34:33.208 sys 0m3.104s 00:34:33.208 21:51:33 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:33.208 21:51:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:33.208 ************************************ 00:34:33.208 END TEST keyring_file 00:34:33.208 ************************************ 00:34:33.208 21:51:33 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:34:33.208 21:51:33 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:33.208 21:51:33 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:33.208 21:51:33 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:33.208 21:51:33 -- common/autotest_common.sh@10 -- # set +x 00:34:33.467 ************************************ 00:34:33.467 START TEST keyring_linux 00:34:33.467 ************************************ 00:34:33.467 21:51:33 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:33.467 * Looking for test 
storage... 00:34:33.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.467 21:51:33 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.467 21:51:33 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.467 21:51:33 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.467 21:51:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.467 21:51:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.467 21:51:33 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.467 21:51:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:33.467 21:51:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:34:33.467 21:51:33 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:33.467 /tmp/:spdk-test:key0 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:34:33.467 21:51:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:33.467 21:51:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:33.467 /tmp/:spdk-test:key1 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1686636 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1686636 00:34:33.467 21:51:33 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:33.467 21:51:33 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1686636 ']' 00:34:33.467 21:51:33 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.467 21:51:33 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:33.467 21:51:33 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:33.467 21:51:33 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:33.467 21:51:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:33.726 [2024-06-07 21:51:33.771035] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
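The inline "python -" steps above are format_interchange_psk building the NVMeTLSkey-1 strings written to /tmp/:spdk-test:key0 and key1. A hedged reconstruction of that helper; the CRC32 byte order and the exact prefix:digest:base64: framing are inferred from the strings visible in this log, not quoted from nvmf/common.sh:

# Sketch: emit "<prefix>:<digest>:<base64(key + crc32)>:" for a TLS PSK.
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # byte order is an assumption
b64 = base64.b64encode(key + crc).decode()
print("{}:{:02x}:{}:".format(prefix, digest, b64))
PYEOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
# if the assumptions hold, this prints the NVMeTLSkey-1:00:MDAx...JEiQ: string used below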
00:34:33.726 [2024-06-07 21:51:33.771098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686636 ] 00:34:33.726 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.726 [2024-06-07 21:51:33.857807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.726 [2024-06-07 21:51:33.949127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:34:34.659 21:51:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:34.659 [2024-06-07 21:51:34.699902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.659 null0 00:34:34.659 [2024-06-07 21:51:34.731943] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:34.659 [2024-06-07 21:51:34.732343] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:34.659 21:51:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:34.659 1022649080 00:34:34.659 21:51:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:34.659 440859466 00:34:34.659 21:51:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1686901 00:34:34.659 21:51:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1686901 /var/tmp/bperf.sock 00:34:34.659 21:51:34 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1686901 ']' 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:34.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:34.659 21:51:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:34.659 [2024-06-07 21:51:34.805516] Starting SPDK v24.09-pre git sha1 422f7ef4e / DPDK 24.03.0 initialization... 
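Condensed, the kernel-keyring round trip being driven above looks like the following; the payload is the one from this log, while the serial printed by keyctl add (1022649080 here) is session-specific:

# Session-keyring round trip in the spirit of keyring/linux.sh.
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
sn=$(keyctl search @s user :spdk-test:key0)   # resolve the name back to a serial
keyctl print "$sn"                            # payload must round-trip unchanged
keyctl unlink "$sn"                           # cleanup, mirrored at the end of the test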
00:34:34.659 [2024-06-07 21:51:34.805575] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686901 ] 00:34:34.659 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.659 [2024-06-07 21:51:34.884554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.917 [2024-06-07 21:51:34.971625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.483 21:51:35 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:35.483 21:51:35 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:34:35.483 21:51:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:35.483 21:51:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:35.741 21:51:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:35.741 21:51:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:36.000 21:51:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:36.000 21:51:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:36.258 [2024-06-07 21:51:36.429772] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:36.258 nvme0n1 00:34:36.258 21:51:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:36.258 21:51:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:36.258 21:51:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:36.258 21:51:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:36.258 21:51:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:36.258 21:51:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:36.516 21:51:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:36.516 21:51:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:36.516 21:51:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:36.517 21:51:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:36.517 21:51:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:36.517 21:51:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:36.517 21:51:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:36.775 21:51:37 keyring_linux -- keyring/linux.sh@25 -- # sn=1022649080 00:34:36.775 21:51:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:36.775 21:51:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0
00:34:36.775 21:51:37 keyring_linux -- keyring/linux.sh@26 -- # [[ 1022649080 == \1\0\2\2\6\4\9\0\8\0 ]]
00:34:36.775 21:51:37 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1022649080
00:34:36.775 21:51:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:34:36.775 21:51:37 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:37.033 Running I/O for 1 seconds...
00:34:37.969
00:34:37.969 Latency(us)
00:34:37.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:37.969 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:34:37.969 nvme0n1 : 1.01 6625.49 25.88 0.00 0.00 19182.42 4766.25 25261.15
00:34:37.969 ===================================================================================================================
00:34:37.969 Total : 6625.49 25.88 0.00 0.00 19182.42 4766.25 25261.15
00:34:37.969 0
00:34:37.969 21:51:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:34:37.969 21:51:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:34:38.228 21:51:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:34:38.486 21:51:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:34:38.486 21:51:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:34:38.486 21:51:38 keyring_linux -- keyring/linux.sh@23 -- # return
00:34:38.486 21:51:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@649 -- # local es=0
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:34:38.486 21:51:38 keyring_linux --
00:34:37.969 21:51:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:34:37.969 21:51:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:34:38.228 21:51:38 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:34:38.228 21:51:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:34:38.486 21:51:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:34:38.486 21:51:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:34:38.486 21:51:38 keyring_linux -- keyring/linux.sh@23 -- # return
00:34:38.486 21:51:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@649 -- # local es=0
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:34:38.486 21:51:38 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:34:38.486 21:51:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:34:38.744 [2024-06-07 21:51:38.905239] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:34:38.744 [2024-06-07 21:51:38.905607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b65c0 (107): Transport endpoint is not connected
00:34:38.744 [2024-06-07 21:51:38.906601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b65c0 (9): Bad file descriptor
00:34:38.744 [2024-06-07 21:51:38.907600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:34:38.744 [2024-06-07 21:51:38.907614] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:34:38.744 [2024-06-07 21:51:38.907624] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:34:38.744 request:
00:34:38.744 {
00:34:38.744 "name": "nvme0",
00:34:38.744 "trtype": "tcp",
00:34:38.744 "traddr": "127.0.0.1",
00:34:38.744 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:34:38.744 "adrfam": "ipv4",
00:34:38.744 "trsvcid": "4420",
00:34:38.744 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:38.744 "psk": ":spdk-test:key1",
00:34:38.744 "method": "bdev_nvme_attach_controller",
00:34:38.744 "req_id": 1
00:34:38.744 }
00:34:38.744 Got JSON-RPC error response
00:34:38.744 response:
00:34:38.744 {
00:34:38.744 "code": -5,
00:34:38.744 "message": "Input/output error"
00:34:38.744 }
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@652 -- # es=1
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@33 -- # sn=1022649080
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1022649080
00:34:38.744 1 links removed
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@33 -- # sn=440859466
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 440859466
00:34:38.744 1 links removed
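
The cleanup traced above amounts to roughly this sketch. The key names, the session-keyring scope, and the keyctl search/unlink commands come straight from the trace; the loop structure is a paraphrase of keyring/linux.sh, not its verbatim source.

    # Remove every test key from the session keyring by serial number.
    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name") || continue   # key may already be gone
        keyctl unlink "$sn"                               # prints "1 links removed"
    done
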
00:34:38.744 21:51:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1686901
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1686901 ']'
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1686901
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@954 -- # uname
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:34:38.744 21:51:38 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1686901
00:34:38.744 21:51:39 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:34:38.744 21:51:39 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:34:38.744 21:51:39 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1686901'
00:34:38.744 killing process with pid 1686901
00:34:38.744 21:51:39 keyring_linux -- common/autotest_common.sh@968 -- # kill 1686901
00:34:38.744 Received shutdown signal, test time was about 1.000000 seconds
00:34:38.744
00:34:38.744 Latency(us)
00:34:38.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:38.744 ===================================================================================================================
00:34:38.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:38.744 21:51:39 keyring_linux -- common/autotest_common.sh@973 -- # wait 1686901
00:34:39.003 21:51:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1686636
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1686636 ']'
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1686636
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@954 -- # uname
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1686636
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1686636'
00:34:39.003 killing process with pid 1686636
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@968 -- # kill 1686636
00:34:39.003 21:51:39 keyring_linux -- common/autotest_common.sh@973 -- # wait 1686636
00:34:39.572
00:34:39.572 real 0m6.089s
00:34:39.572 user 0m11.440s
00:34:39.572 sys 0m1.471s
00:34:39.572 21:51:39 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable
00:34:39.572 21:51:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:34:39.572 ************************************
00:34:39.572 END TEST keyring_linux
00:34:39.572 ************************************
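
The killprocess helper traced twice above follows a simple pattern: check the pid is alive, kill it, then reap it so its shutdown output lands in the log. This is a hedged sketch of the logic visible in the trace, not the verbatim autotest_common.sh source; it assumes $pid is a child of the calling shell (as it is here), and the real helper additionally special-cases processes running under sudo.

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # require a pid argument
        kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if already gone
        ps --no-headers -o comm= "$pid"         # real helper inspects the comm name here
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"              # wait reaps the child and collects its exit
    }
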
00:34:39.572 21:51:39 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:34:39.572 21:51:39 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:34:39.572 21:51:39 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:34:39.572 21:51:39 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:34:39.572 21:51:39 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:34:39.572 21:51:39 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:34:39.572 21:51:39 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:34:39.572 21:51:39 -- common/autotest_common.sh@723 -- # xtrace_disable
00:34:39.572 21:51:39 -- common/autotest_common.sh@10 -- # set +x
00:34:39.572 21:51:39 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:34:39.572 21:51:39 -- common/autotest_common.sh@1391 -- # local autotest_es=0
00:34:39.572 21:51:39 -- common/autotest_common.sh@1392 -- # xtrace_disable
00:34:39.572 21:51:39 -- common/autotest_common.sh@10 -- # set +x
00:34:44.867 INFO: APP EXITING
00:34:44.867 INFO: killing all VMs
00:34:44.867 INFO: killing vhost app
00:34:44.867 WARN: no vhost pid file found
00:34:44.867 INFO: EXIT DONE
00:34:48.153 0000:86:00.0 (8086 0a54): Already using the nvme driver
00:34:48.153 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:34:48.154 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:34:51.442 Cleaning
00:34:51.442 Removing: /var/run/dpdk/spdk0/config
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:34:51.442 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:51.442 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:51.442 Removing: /var/run/dpdk/spdk1/config
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:34:51.442 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:34:51.442 Removing: /var/run/dpdk/spdk1/hugepage_info
00:34:51.442 Removing: /var/run/dpdk/spdk1/mp_socket
00:34:51.442 Removing: /var/run/dpdk/spdk2/config
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:34:51.442 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:34:51.442 Removing: /var/run/dpdk/spdk2/hugepage_info
00:34:51.442 Removing: /var/run/dpdk/spdk3/config
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:34:51.442 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:34:51.442 Removing: /var/run/dpdk/spdk3/hugepage_info
00:34:51.442 Removing: /var/run/dpdk/spdk4/config
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:34:51.442 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:34:51.442 Removing: /var/run/dpdk/spdk4/hugepage_info
00:34:51.442 Removing: /dev/shm/bdev_svc_trace.1
00:34:51.442 Removing: /dev/shm/nvmf_trace.0
00:34:51.442 Removing: /dev/shm/spdk_tgt_trace.pid1241152
00:34:51.442 Removing: /var/run/dpdk/spdk0
00:34:51.442 Removing: /var/run/dpdk/spdk1
00:34:51.442 Removing: /var/run/dpdk/spdk2
00:34:51.442 Removing: /var/run/dpdk/spdk3
00:34:51.442 Removing: /var/run/dpdk/spdk4
00:34:51.442 Removing: /var/run/dpdk/spdk_pid1238735
00:34:51.442 Removing: /var/run/dpdk/spdk_pid1239955
00:34:51.442 Removing: /var/run/dpdk/spdk_pid1241152
00:34:51.443 Removing: /var/run/dpdk/spdk_pid1241858
00:34:51.443 Removing: /var/run/dpdk/spdk_pid1242928
00:34:51.443 Removing: /var/run/dpdk/spdk_pid1243202
00:34:51.443 Removing: /var/run/dpdk/spdk_pid1244301
00:34:51.443 Removing: /var/run/dpdk/spdk_pid1244566
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1244758
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1246649
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1248061
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1248390
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1248712
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1249150
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1249603
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1249830
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1250069
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1250376
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1251587
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1254967
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1255357
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1255810
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1255899
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1256380
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1256640
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1257198
00:34:51.702 Removing: /var/run/dpdk/spdk_pid1257344
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1257605
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1257773
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1258067
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1258324
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1258730
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1258981
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1259311
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1259663
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1259884
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1259947
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1260232
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1260511
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1260797
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1261074
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1261360
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1261639
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1261916
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1262204
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1262481
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1262766
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1263046
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1263331
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1263608
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1263890
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1264173
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1264456
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1264745
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1265040
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1265331
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1265667
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1265909
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1266252
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1270643
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1321645
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1326539
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1337621
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1343767
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1348508
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1349135
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1362714
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1362716
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1363756
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1364552
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1365593
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1366129
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1366218
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1366559
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1366655
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1366657
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1367702
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1368507
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1369536
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1370320
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1370329
00:34:51.703 Removing: /var/run/dpdk/spdk_pid1370589
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1371990
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1373095
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1382353
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1382770
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1387733
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1394583
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1398119
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1410176
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1420343
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1422255
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1423221
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1442624
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1446968
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1481350
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1486833
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1489038
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1490880
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1491143
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1491163
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1491419
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1491997
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1494086
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1495204
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1495763
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1498163
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1498974
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1499611
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1504391
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1515635
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1519821
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1526637
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1528096
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1529890
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1535246
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1539812
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1548440
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1548448
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1553808
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1554067
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1554325
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1554625
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1554815
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1559949
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1560581
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1565632
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1568455
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1574855
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1581366
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1592186
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1599879
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1599929
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1621344
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1622122
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1622658
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1623200
00:34:51.962 Removing: /var/run/dpdk/spdk_pid1624165
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1624830
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1625373
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1626161
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1630857
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1631176
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1638429
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1638661
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1641000
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1649539
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1649554
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1655509
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1657504
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1659739
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1660929
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1663172
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1664378
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1674314
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1674836
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1675367
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1677942
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1678575
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1679146
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1683862
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1684047
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1686043
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1686636
00:34:52.221 Removing: /var/run/dpdk/spdk_pid1686901
00:34:52.221 Clean
00:34:52.221 21:51:52 -- common/autotest_common.sh@1450 -- # return 0
00:34:52.221 21:51:52 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:34:52.221 21:51:52 -- common/autotest_common.sh@729 -- # xtrace_disable
00:34:52.221 21:51:52 -- common/autotest_common.sh@10 -- # set +x
00:34:52.221 21:51:52 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:34:52.221 21:51:52 -- common/autotest_common.sh@729 -- # xtrace_disable
00:34:52.221 21:51:52 -- common/autotest_common.sh@10 -- # set +x
00:34:52.221 21:51:52 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:52.221 21:51:52 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:52.221 21:51:52 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:52.480 21:51:52 -- spdk/autotest.sh@391 -- # hash lcov
00:34:52.480 21:51:52 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:34:52.480 21:51:52 -- spdk/autotest.sh@393 -- # hostname
00:34:52.480 21:51:52 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:34:52.480 geninfo: WARNING: invalid characters removed from testname!
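
The coverage post-processing around this point (the capture above and the merge/filter runs that follow) condenses to this sketch. The flags, file names, and removal patterns are taken from the lcov invocations in this log; $SPDK_DIR is a stand-in for the full workspace path.

    # Capture test coverage, merge it with the pre-test baseline, then strip
    # code that should not count against SPDK (DPDK, system headers, examples).
    lcov -q -c -d "$SPDK_DIR" --no-external -t "$(hostname)" -o cov_test.info
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info
    done
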
00:35:24.650 21:52:21 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:25.587 21:52:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:28.876 21:52:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:31.412 21:52:31 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:34.699 21:52:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:37.249 21:52:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:40.540 21:52:40 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:40.540 21:52:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:40.540 21:52:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:35:40.540 21:52:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:40.540 21:52:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:40.541 21:52:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:40.541 21:52:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:40.541 21:52:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:40.541 21:52:40 -- paths/export.sh@5 -- $ export PATH
00:35:40.541 21:52:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:40.541 21:52:40 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:35:40.541 21:52:40 -- common/autobuild_common.sh@437 -- $ date +%s
00:35:40.541 21:52:40 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717789960.XXXXXX
00:35:40.541 21:52:40 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717789960.oF64Bw
00:35:40.541 21:52:40 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:35:40.541 21:52:40 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:35:40.541 21:52:40 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:35:40.541 21:52:40 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:35:40.541 21:52:40 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:35:40.541 21:52:40 -- common/autobuild_common.sh@453 -- $ get_config_params
00:35:40.541 21:52:40 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:35:40.541 21:52:40 -- common/autotest_common.sh@10 -- $ set +x
00:35:40.541 21:52:40 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:35:40.541 21:52:40 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:35:40.541 21:52:40 -- pm/common@17 -- $ local monitor
00:35:40.541 21:52:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:40.541 21:52:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:40.541 21:52:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:40.541 21:52:40 -- pm/common@21 -- $ date +%s
00:35:40.541 21:52:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:40.541 21:52:40 -- pm/common@21 -- $ date +%s
00:35:40.541 21:52:40 -- pm/common@25 -- $ sleep 1
00:35:40.541 21:52:40 -- pm/common@21 -- $ date +%s
00:35:40.541 21:52:40 -- pm/common@21 -- $ date +%s
00:35:40.541 21:52:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717789960
00:35:40.541 21:52:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717789960
00:35:40.541 21:52:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717789960
00:35:40.541 21:52:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717789960
00:35:40.541 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717789960_collect-vmstat.pm.log
00:35:40.541 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717789960_collect-cpu-load.pm.log
00:35:40.541 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717789960_collect-cpu-temp.pm.log
00:35:40.541 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717789960_collect-bmc-pm.bmc.pm.log
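
Each collector launched above records its pid under the power output directory; the stop_monitor_resources teardown traced below signals whichever pid files exist. A rough sketch, assuming $output names the build output directory (the monitor names and the sudo special case for the BMC collector come from this log):

    for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$output/power/$mon.pid"
        [[ -e $pidfile ]] || continue            # monitor never started or already stopped
        pid=$(< "$pidfile")
        if [[ $mon == collect-bmc-pm ]]; then
            sudo -E kill -TERM "$pid"            # BMC collector runs under sudo
        else
            kill -TERM "$pid"
        fi
    done
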
00:35:41.110 21:52:41 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:35:41.110 21:52:41 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:35:41.110 21:52:41 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:41.110 21:52:41 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:35:41.110 21:52:41 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:35:41.110 21:52:41 -- spdk/autopackage.sh@19 -- $ timing_finish
00:35:41.110 21:52:41 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:41.110 21:52:41 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:35:41.110 21:52:41 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:41.110 21:52:41 -- spdk/autopackage.sh@20 -- $ exit 0
00:35:41.110 21:52:41 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:35:41.110 21:52:41 -- pm/common@29 -- $ signal_monitor_resources TERM
00:35:41.110 21:52:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:35:41.110 21:52:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:41.110 21:52:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:35:41.110 21:52:41 -- pm/common@44 -- $ pid=1698204
00:35:41.110 21:52:41 -- pm/common@50 -- $ kill -TERM 1698204
00:35:41.110 21:52:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:41.110 21:52:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:35:41.110 21:52:41 -- pm/common@44 -- $ pid=1698205
00:35:41.110 21:52:41 -- pm/common@50 -- $ kill -TERM 1698205
00:35:41.110 21:52:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:41.110 21:52:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:35:41.110 21:52:41 -- pm/common@44 -- $ pid=1698207
00:35:41.110 21:52:41 -- pm/common@50 -- $ kill -TERM 1698207
00:35:41.110 21:52:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:41.110 21:52:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:35:41.110 21:52:41 -- pm/common@44 -- $ pid=1698230
00:35:41.110 21:52:41 -- pm/common@50 -- $ sudo -E kill -TERM 1698230
00:35:41.110 + [[ -n 1123420 ]]
00:35:41.110 + sudo kill 1123420
00:35:41.379 [Pipeline] }
00:35:41.399 [Pipeline] // stage
00:35:41.405 [Pipeline] }
00:35:41.423 [Pipeline] // timeout
00:35:41.428 [Pipeline] }
00:35:41.445 [Pipeline] // catchError
00:35:41.452 [Pipeline] }
00:35:41.470 [Pipeline] // wrap
00:35:41.476 [Pipeline] }
00:35:41.492 [Pipeline] // catchError
00:35:41.502 [Pipeline] stage
00:35:41.504 [Pipeline] { (Epilogue)
00:35:41.519 [Pipeline] catchError
00:35:41.521 [Pipeline] {
00:35:41.536 [Pipeline] echo
00:35:41.538 Cleanup processes
00:35:41.544 [Pipeline] sh
00:35:41.832 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:41.832 1698320 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:35:41.832 1698651 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:41.846 [Pipeline] sh
00:35:42.130 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:42.130 ++ grep -v 'sudo pgrep'
00:35:42.130 ++ awk '{print $1}'
00:35:42.130 + sudo kill -9 1698320
00:35:42.142 [Pipeline] sh
00:35:42.426 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:57.323 [Pipeline] sh
00:35:57.700 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:57.700 Artifacts sizes are good
00:35:57.713 [Pipeline] archiveArtifacts
00:35:57.720 Archiving artifacts
00:35:57.930 [Pipeline] sh
00:35:58.214 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:35:58.228 [Pipeline] cleanWs
00:35:58.238 [WS-CLEANUP] Deleting project workspace...
00:35:58.238 [WS-CLEANUP] Deferred wipeout is used...
00:35:58.244 [WS-CLEANUP] done
00:35:58.246 [Pipeline] }
00:35:58.266 [Pipeline] // catchError
00:35:58.277 [Pipeline] sh
00:35:58.555 + logger -p user.info -t JENKINS-CI
00:35:58.563 [Pipeline] }
00:35:58.579 [Pipeline] // stage
00:35:58.583 [Pipeline] }
00:35:58.598 [Pipeline] // node
00:35:58.602 [Pipeline] End of Pipeline
00:35:58.641 Finished: SUCCESS